Science Policy For All

Because science policy affects everyone.

Science Policy Around the Web January 28th, 2020


By Hannah King, PhD

Source: Flickr

Containing new coronavirus may not be feasible, experts say, as they warn of possible sustained global spread

The Wuhan coronavirus has reached American soil, with 5 cases identified across the U.S. on top of the more than 2,000 confirmed cases and 82 deaths in China. These infections have put a spotlight on the strategies that governments and public health organizations put in place to limit the spread of novel infectious diseases, the successes these measures can achieve, and the impossibility of ever being fully prepared.

Chinese authorities have put strict measures in place to try to contain the outbreak, including postponing the return to work following the Lunar New Year holiday across China and preventing transport—flights, trains, buses and cars—from leaving Wuhan, the city at the center of the outbreak.

Other countries, including the US, have also put screening measures in place. At-risk airports have implemented temperature screening, and hospitals are on high alert, rapidly adapting their screening procedures to identify individuals suspected of carrying the disease. This has resulted in prompt identification of cases in the US, with patients isolated prior to confirmation of infection. According to health officials, the response to these cases has been an “example of how it should be done”.

The global scientific effort has also been praised. The virus was described less than a month after cases were first reported, and the first scientific papers have already been released. A comment in the journal The Lancet by epidemiologist David Heymann also praised the rapid peer review and free sharing of information, which has enabled a “global collaboration” to better understand and prevent transmission of this disease.

These containment efforts contributed to the World Health Organization opting on Thursday not to declare this disease a Public Health Emergency of International Concern (PHEIC). The decision largely reflects the relatively small scale of spread outside of China, including no reported cases of onward transmission outside the country. According to Didier Houssin, the chair of the WHO emergency committee assessing the Wuhan coronavirus, it is also due to “the efforts presently made by Chinese authorities in order to contain the disease.”

However, experts warn that these measures may be insufficient to prevent the spread of this disease. Dr. Allison McGeer, an infectious disease specialist from Toronto, has suggested that “the more we learn about it, the greater the possibility is that transmission will not be able to be controlled with public health measures”. Professor Neil Ferguson, a public health expert at Imperial College London, has suggested that there may already be as many as 100,000 cases in China.

The virus also has properties that may make containment more of a challenge. It has a shorter incubation period than other coronaviruses such as SARS, making it more difficult to identify infected persons before they can infect others. It has also emerged that the virus may not cause symptoms in all infected individuals. It is still unclear whether asymptomatic individuals are able to transmit the virus; if they can, the effectiveness of public health tools such as quarantine and isolation in stopping viral spread would be greatly reduced.

A further concern in the US is that samples must be sent to the CDC to be tested for the coronavirus. Kelly Wroblewski, of the Association of Public Health Laboratories, has said that if the virus started to spread widely in the US this could “overwhelm” a single testing location, and recommended decentralizing the process.

While many containment policies have been enacted here, experts such as Dr. Trevor Bedford, a computational biologist at the Fred Hutchinson Cancer Research Center, caution that, given the current infection rate of the new virus, “if it’s not contained shortly, I think we are looking at a pandemic”.

(Helen Branswell, Stat News)

Written by sciencepolicyforall

January 28, 2020 at 4:43 pm

Targeting the spread of unregulated stem cell and regenerative therapies


By: Kellsye Fabian, PhD

Image by Darko Stojanovic from Pixabay

Advances in regenerative medicine research have generated significant public interest in therapies that have the potential to restore the normal function of cells, tissues and organs damaged by age, disease, or trauma. Investment and enthusiasm in this field have propelled the development of regenerative therapies such as cell therapy, bioengineered tissue products, and gene therapy. While several hundred of these treatments have progressed to clinical trials, the Food and Drug Administration (FDA) has approved only a few regenerative therapies. Of these, most are stem cell-based products derived from umbilical cord blood used to treat blood cancers and immune disorders, and three are gene therapies to treat cancer or blindness.

Alarmingly, an increasing number of businesses and for-profit clinics have been marketing regenerative therapies, mostly stem cell products, that have not been reviewed by the FDA. In 2016, there were 351 stem cell businesses offering interventions that could be administered in 570 clinics. That number was estimated to have doubled in 2018. Most of these establishments tout that their products can treat or cure serious illnesses and/or provide a wide range of benefits. These claims are often unsubstantiated. Moreover, these unapproved interventions pose a great danger to patients and have resulted in serious complications including blindness, infections, cardiovascular complications, cancer and death.

Some patients remain willing to take the risks, especially those with serious diseases who have exhausted all conventional treatments or those searching for alternative therapies. These individuals often fall prey to overly optimistic portrayals of stem cell products in the media and in advertisements from stem cell companies.

For years, these unscrupulous businesses have avoided heavy regulation. Physicians, researchers and ethicists have urged stricter monitoring of regenerative therapies as the commercial activity surrounding these interventions has expanded. In response, the FDA has increased its oversight of the field and has issued guidance on the regulation of human cells, tissues and cellular or tissue-based products (HCT/Ps) to ensure that commercialized regenerative therapies are safe and founded on scientific evidence.

The FDA’s Increased Oversight

Since 2017, the FDA has increased oversight and enforcement of regulations against unscrupulous providers of stem cell products. In 2018, the FDA sought permanent injunctions against two stem cell clinics, California Stem Cell Treatment Center Inc and Florida-based US Stem Cell Clinic LLC, for selling unapproved stem cell products and for significantly deviating from current good manufacturing practice requirements that ensure the sterility of biological products. 

The case against California Stem Cell Treatment Centers began in August 2017, when the US Marshals Service, on behalf of the FDA, seized five vials of smallpox virus vaccine from a clinic affiliated with California Stem Cell Treatment Centers. The vaccine was provided by a company called StemImmune and was being combined with stromal vascular fraction (SVF), a mixture of cells derived from patients’ adipose (fat) tissue that includes a small number of mesenchymal stem cells. This combined product was then administered to cancer patients at California Stem Cell Treatment Centers intravenously or through direct injection into patients’ tumors.

Cancer patients have potentially compromised immune systems and the use of a vaccine in this manner could pose great risks, such as inflammation and swelling of the heart and surrounding tissues, to the patients. In addition, California Stem Cell Treatment Center provided unapproved treatments to patients with arthritis, stroke, ALS, multiple sclerosis, macular degeneration, Parkinson’s disease, COPD, and diabetes. The injunction case against California Stem Cell Treatment Center is still pending.

US Stem Cell Clinic also marketed SVF to patients seeking treatment for conditions such as Parkinson’s disease, amyotrophic lateral sclerosis (ALS), chronic obstructive pulmonary disease (COPD), heart disease and pulmonary fibrosis. Three women with macular degeneration, an eye disease that causes vision loss, went blind after receiving eye injections of SVF products from US Stem Cell Clinic. Following these events, in June 2019 a Florida judge ruled that the FDA is entitled to an injunction against US Stem Cell Clinic, meaning that the FDA has the authority to regulate the clinic and stop it from providing potentially harmful products.

While this decision strengthened the position of the FDA as a regulatory body for regenerative medicine, businesses have found other tactics to continue selling unapproved products. After the court ruling, US Stem Cell Clinic stopped selling the fat-based procedure but said that it would continue to offer other stem cell treatments. Instead of stem cells derived from fat, which were the subject of the injunction, the company would now harvest cells from patients’ bone marrow and other tissues to “treat” different conditions. Another company, Liveyon, was warned by the FDA in December 2019 for selling unapproved umbilical cord blood-based products that were tied to life-threatening bacterial infections. Liveyon has since halted the distribution of its products in the US but has opened a clinic in Cancun, Mexico, where it continues “treating” patients outside the scope of the FDA. Other companies have changed their terminology and marketing language to escape the FDA crackdown on stem cell clinics. Instead of using the phrase “stem cells” on their websites and in advertising, they now use “cellular therapy” and “allografts.”

The FDA’s Regulatory Framework for Regenerative Medicine

The warnings and injunctions filed by the FDA against the aforementioned stem cell businesses were issued in conjunction with the comprehensive policy framework for regenerative medicine that the agency announced in November 2017. The policy framework aims to clarify which medical products are subject to the agency’s approval requirements and to streamline the review process for new regenerative therapies.

In the case of cellular and tissue products and procedures, there is often a gray area between what should be considered a medical product, which is under FDA oversight, and what should be considered an individualized treatment performed by a doctor within their medical practice, which is not regulated by the FDA. Stem cell clinics have often used this ambiguity as justification to sell products without FDA approval. According to the new guidelines, for cells and tissue to be exempt from FDA regulation, several criteria must be met: 1) they must be taken from and given back to the same individual during the same surgery, 2) they must not undergo significant manufacturing (minimal manipulation), 3) they must perform the same basic function (homologous use) when re-introduced to the patient, 4) they must not be combined with another drug or device, and 5) the benefits and risks must be well understood. If any of these criteria are not met, the cells or tissue are considered a drug or biologic and are subject to pre-market review by the FDA.

Some ambiguities persist in the current form of the policy, such as what constitutes “minimal manipulation” and how to address nonhomologous use (i.e., cells or tissues used in ways other than their original function). The guidelines are an important starting point in determining which therapies fall under the FDA’s purview, and continued dialogue between the FDA and stakeholders involved in product development will provide more clarity about how products will be classified.

The policy framework also addresses how the FDA aims to implement the regenerative medicine provisions of the 21st Century Cures Act. Signed into law in 2016, the Cures Act is designed to expedite the development and review of innovative medical products. One of the new programs under this law is the Regenerative Medicine Advanced Therapy (RMAT) designation. A product is only eligible for RMAT designation if 1) it is a cell therapy, therapeutic tissue-engineering product, HCT/P, gene therapy, or combination product using any such therapy; 2) it is intended to treat, modify, reverse, or cure a serious condition; and 3) preliminary clinical evidence indicates that the therapy has the potential to address unmet medical needs for such condition. Stakeholders involved in product development strongly support the creation of this expedited review program. Meanwhile, others are concerned that the RMAT designation will lead to the approval of therapies based on fewer or smaller studies and, hence, treatment-related adverse events would emerge only after a product is on the market. But since RMAT therapies are intended to treat serious conditions, the risks may be acceptable and may be outweighed by the benefits to the patients. Nevertheless, postmarket studies would be essential and must be required to ensure the safety and efficacy of RMAT therapies. 

The establishment of this policy framework is a clear step toward better regulation of a largely unchecked market for regenerative therapies. Increased enforcement of the new guidelines will hopefully dissuade unscrupulous businesses from taking shortcuts while encouraging legitimate companies to develop novel treatments. This will help ensure that regenerative medicine remains an exciting field with the potential to provide innovative treatments that improve human health.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

January 24, 2020 at 7:36 pm

Science Policy Around the Web January 20th, 2020


By Emma Kurnat-Thomas, PhD, MS, RN

Image by Pettycon from Pixabay

Medicare 2020: Can’t We All Just Get Along? Maybe—First, Arm Thyself with Facts.

As political debates and coverage begin for the 2020 Election, health care policy, and the science that supports it, is on the agenda [1]. For better or for worse, Medicare statistics will be thrown down like a gauntlet, arguments will be had, and headlines will make us feel that an imminent threat may be upon us if we choose poorly [2]. While we may choose to avoid such topics at holiday gatherings and in mixed company, we all have a civic responsibility to inform ourselves about the basics of the health policy terms coming down the pike—here is what to look for, regardless of one’s political leaning.

Medicare funding is usually placed within the broader framework of the health care reform discussion, which is characterized by the Affordable Care Act (ACA) structure, its ongoing judicial challenges and Republican-led repeal efforts [3]. Proposals, and debates of their merits, will largely rest on candidates’ views of federal versus state governance of health insurance coverage, access, and affordability. In the current ACA framework, the federal government provides a significant portion of subsidized health insurance coverage and sets the minimum threshold for market regulations [4]. However, states have the flexibility to implement and regulate markets according to their policy goals and preferences [4]. A leading criticism of the current ACA structure is that court challenges and state preferences have resulted in a disjointed approach, leaving glaring geographic disparities in costs, coverage, and access to healthcare, particularly for rural and underserved populations and in states that opted out of Medicaid expansion.

Competing health care policy proposals for Election 2020 will aim to address these disparities. Republican proposals will seek a federalist approach that maximizes state autonomy and flexibility in choosing insurance marketplace provisions, with an emphasis on experimentation with innovative state models that can reduce costs and provide coverage [3,4]. Democratic proposals will seek a stronger federal approach, such as universal coverage, ‘Medicare for All’, single-payer models, public options to strengthen Medicare, and supplemental mechanisms to support at-risk populations such as undocumented immigrants and citizens in states that did not expand Medicaid [5]. Regardless of our political stripes, we can agree that keeping ourselves informed enough to follow the leading candidates’ proposals and cast an educated vote should be among our New Year’s 2020 resolutions [6].

  1. Simmons-Duffin, S. NPR. 1/14/2020. Medicare for All? A Public Option? Health Care Terms explained. Accessed 1/20/2020: https://www.npr.org/sections/health-shots/2020/01/14/796246568/medicare-for-all-a-public-option-health-care-terms-explained
  2. Doherty, T. Politico. 9/12/2018. Medicare’s time bomb, in 7 charts. Accessed 1/20/2020: https://www.politico.com/agenda/story/2018/09/12/medicare-baby-boomers-trust-fund-000694
  3. Chen, L. (2018). Getting ready for health reform 2020: Republicans’ options for improving upon the state innovation approach. Health Affairs. Accessed 1/20/2020: https://www.healthaffairs.org/doi/pdf/10.1377%2Fhlthaff.2018.05119
  4. Collins, S. & Lambrew, J. (2019). Federalism, the Affordable Care Act, and Health Reform in the 2020 Election. The Commonwealth Fund. Accessed 1/20/2020: https://www.commonwealthfund.org/publications/fund-reports/2019/jul/federalism-affordable-care-act-health-reform-2020-election
  5. Linke, C. & Fiedler, M. (2019). What would the 2020 candidates’ proposals mean for health care coverage. Brookings Policy 2020. Accessed 1/20/2020: https://www.brookings.edu/policy2020/votervital/what-would-the-2020-candidates-proposals-mean-for-health-care-coverage/
  6. Politico. (2020). Election 2020. Where Democratic candidates stand on Health Care and Science/Technology Policy Proposals. Accessed 1/20/2020: https://www.politico.com/2020-election/candidates-views-on-the-issues/health-care/

Written by sciencepolicyforall

January 21, 2020 at 10:06 am

Expedited Drug Approvals: When Speeding Saves Lives


By: Maria Disotaur, PhD

Source: piqsels.com

Changes in laws and regulations have accelerated the drug approval process for rare and fatal diseases. Yet, some experts worry the process is now moving too fast, while others argue that slowing down the process could cost patients their lives. 

The first case of acquired immunodeficiency syndrome (AIDS) in the United States was reported in 1981. Ten years later, more than 250,000 Americans were living with the disease or had died from the epidemic. During this time, activist groups believed the drug approval process was unacceptably slow and possibly contributing to the deaths of thousands of Americans. They demanded that drugs be proven safe and effective at a faster rate; prior to 1992, the Food and Drug Administration’s (FDA) drug approval process could take two and a half to eight years due to poor staffing and a lack of resources within the agency. Protests at FDA headquarters led to the establishment of streamlined policies and regulations designed to speed the approval of life-saving drugs for serious and often fatal diseases.

In 1992, a series of complex regulations and processes were established to place life-saving drugs in the hands of patients as expeditiously as possible. Beginning with the Prescription Drug User Fee Act (PDUFA), the agency could charge pharmaceutical companies a $200,000 reviewer fee for a new drug application (NDA). This new policy increased agency funds and personnel and reduced the time needed to approve a new drug to approximately eighteen months. To further expedite the process, the agency introduced accelerated approval and priority review. The former allowed the FDA to use a surrogate endpoint to approve a new drug for a serious medical condition with an unmet medical need. Priority review required the FDA to review a drug within six months, compared to the standard ten months, if the drug showed evidence of significant improvement in treatment, diagnosis, or prevention. These were followed by fast track designation in 1998 and breakthrough therapy designation in 2012, which were designed to expedite the development and review of life-saving drugs that, respectively, fulfilled an unmet need or were better than current market drugs.

Since their introduction, these regulations have given rise to two opposing camps: those who think the drug approval process is moving too fast and those who think it is not moving fast enough. Pharmaceutical companies, health professionals, and patient advocacy groups have argued that millions of Americans are suffering from rare and orphan diseases that require new or enhanced therapies. On the other hand, experts argue that expediting drug approvals should not alter the fundamental principles of testing a new drug’s efficacy through extensive preclinical research and clinical trials.

A recent study published in the Journal of the American Medical Association (JAMA) points to some of the downfalls associated with the FDA’s accelerated approval process. The study looked at 93 cancer drugs approved by the FDA from 1992 to 2017 through the accelerated approval pathway and analyzed the results of confirmatory trials, the phase 4 post-marketing trials required by the FDA to confirm the clinical benefit of a drug. Of the 93 cancer drugs approved, only 19 had confirmatory trials that reported an improvement in the overall survival of patients. The study authors concluded that “it is important to recognize the clinical and scientific trade-offs of this approach,” particularly since “the clinical community will have less information about the risks and benefits of drugs approved via the accelerated approval program” until confirmatory trials are completed to analyze the clinical benefit and survival for patients. Furthermore, others, like Dr. Michael Carome, a physician and director at the consumer advocacy group Public Citizen, have raised concerns about the medical value and cost of drugs that have limited scientific data, particularly the burden placed on families and patients by drugs that in many instances (80% in the JAMA analysis) never prove effective or improve patient survival.

This was the case for Eli Lilly’s drug Lartruvo, as reported by the Wall Street Journal last summer. In 2016, Lartruvo became the first drug approved by the FDA for soft-tissue sarcoma since the 1970s. The drug was approved via the accelerated approval pathway after Eli Lilly completed a study of 133 patients showing that Lartruvo plus chemotherapy extended median patient survival by 11.8 months compared to chemotherapy alone. By April 2019, Eli Lilly announced it was removing Lartruvo from the market because it did not show an improvement in patient survival in phase 3 clinical trials. The withdrawal left patients dismayed and questioning the long-term effects of the treatment. Prior to Lartruvo’s approval, Dr. James Liebman, an oncologist from Beverly Hospital, and Dr. Hussein Tawbi, an associate professor at MD Anderson Cancer Center, expressed concerns about the limited sample size and confounding results for Lartruvo and recommended that the FDA delay the approval until other trials were conducted. At the time, the FDA acknowledged their concerns but also noted that treatments for advanced sarcoma were limited and that the drug could have a clinical benefit on the market.

These critical decisions are becoming more routine as the FDA tries to meet the demands of doctors, patients, and lawmakers to approve drugs for fatal diseases at a faster rate. In 2009, only 10 drugs were approved through an expedited pathway – either fast track, priority review, accelerated approval, or breakthrough therapy. Last year, that number rose to 43 of the 59 novel drugs approved. This jump can be partially attributed to the 21st Century Cures Act, which is designed to expedite the development of new devices and drugs and, in some instances, allows the FDA to consider anecdotal evidence such as patients’ perspectives when reviewing a new drug.

The increase can also be attributed to new biological and diagnostic tools – for example, flow cytometry and next-generation DNA sequencing, which allow scientists to detect one cancer cell among a million healthy cells, as reported by Scientific American last month. This new measure is called minimal residual disease (MRD), and scientists hope it can be used to accelerate clinical trials and the development of novel drugs. Current studies of B-cell acute lymphoblastic leukemia indicate that MRD-positive patients are more likely to relapse: patients with more than 1 residual cancer cell in 10,000 cells lived approximately six months without relapse, while those with fewer than 1 residual cancer cell in 10,000 lived an average of two years without relapse. Scientists hope this method can be used as a future surrogate endpoint, and some patient advocacy groups believe that for some cancers, such as multiple myeloma, there is enough evidence to use the method today. A study published in JAMA showed that multiple myeloma patients who were MRD-negative had lower relapse rates and roughly 50% longer survival than those who were MRD-positive. The predictive value of MRD indicates that for certain cancers it could serve as an alternative surrogate endpoint – one that could speed up clinical trials compared to those that measure tumor shrinkage and overall survival.

These new tools and diagnostics have been prompting patients and doctors to demand faster drug approvals, particularly for rare and life-threatening diseases. They have also compelled FDA officials and leaders like Janet Woodcock, director of the FDA’s Center for Drug Evaluation and Research, to acknowledge that new scientific tools and advancements will continue to speed up the drug approval process. Concurrently, the agency understands the demands of patients living with life-threatening illnesses and is preparing to enhance its best practices. In a recent press release, the FDA announced it would begin the “reorganization of the office of new drugs with corresponding changes to the office of translational sciences and the office of pharmaceutical quality.” This strategic move aims to “create offices that align interrelated disease areas and divisions with clearer and more focused areas of expertise.” The goal is to enhance efficiency and provide FDA scientists with a better understanding of diseases that may require future FDA drug approval. These changes within the FDA infrastructure, along with biological advancements, will continue to affect the speed at which the FDA approves new drugs. Some will argue that accelerating the process is reckless and a danger to vulnerable patients; however, for some of these patients, accelerating the process can be the difference between life and death.
For these patients, the question of expediting access to new treatments is not “Are we moving too fast?” but “Can we afford not to?”

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

January 17, 2020 at 6:48 pm

Science Policy Around the Web January 16th, 2020


By Andrew H. Beaven, PhD

Facts & Figures 2020 Reports Largest One-year Drop in Cancer Mortality

On January 11, 1964, the U.S. Surgeon General reported that cigarette smoking is a cause of lung cancer and laryngeal cancer in men, a probable cause of lung cancer in women, and the most important cause of chronic bronchitis. This led to the Federal Cigarette Labeling and Advertising Act of 1965 and the Public Health Cigarette Smoking Act of 1969, which required warnings on cigarette packages, banned cigarette advertising in broadcast media, and called for an annual report on the health consequences of smoking.

Fifty-six years later, lung cancer is still the leading cause of cancer mortality in the U.S. – accounting for almost one-quarter of all cancer deaths. However, with an ever-increasing understanding of how to treat cancer and a general decline in smoking in America, the American Cancer Society announced a 2.2% drop in the American cancer death rate between 2016 and 2017, the largest single-year drop in cancer mortality (statistics are reported in the American Cancer Society’s peer-reviewed journal, CA: A Cancer Journal for Clinicians). This substantial decrease in the mortality rate is primarily attributed to a decrease in lung cancer deaths. Coincidentally, the report aligns with recent legislation raising the age to buy tobacco products from 18 to 21 years old. This legislation was included in the federal year-end legislative package, passed by both houses of Congress, and signed into law on December 20, 2019 by President Donald Trump. The goal of the legislation is to keep tobacco out of teenagers’ hands, with the hope that teens who do not start using tobacco early will never start using tobacco products.

(Stacy Simon, American Cancer Society)

NASA, NOAA Analyses Reveal 2019 Second Warmest Year on Record

New, independent analyses by the U.S. federal agencies NASA and NOAA demonstrate Earth’s continuing warming. Global surface temperatures in 2019 were the second hottest since 1880, when modern recordkeeping began. These results, posted online January 15, continue a concerning trend – the past five years have been the warmest of the last 140 years (the hottest year was 2016). NASA and NOAA report temperature on a relative scale based on the mean temperature between 1951 and 1980; the 2019 anomaly was 1.8 ºF (0.98 ºC) warmer than that baseline. The report makes special note that average global warming does not imply that all areas experience the same warming. For example, NOAA reported that the contiguous 48 U.S. states experienced only the 34th warmest year on record, earning a “warmer than average” classification, while Alaska experienced its warmest year on record.

To correct for biases, the scientists account for the varied spacing of temperature stations, urban heat island effects, data-poor regions, changing weather station locations, and changing measurement practices. Through continuing modeling and statistical analyses, scientists continue to conclude that this rapid uptick in temperature is caused by increased greenhouse gas emissions from human activities.

(Steve Cole, Peter Jacobs, Katherine Brown, NASA)

Written by sciencepolicyforall

January 16, 2020 at 9:38 am

Science Policy Around the Web January 14th, 2020


By Thomas Dannenhoffer-Lafage, PhD

Image by Pexels from Pixabay 

The FDA Announces Two More Antacid Recalls Due to Cancer Risk

The FDA has recently announced voluntary recalls of two prescription forms of ranitidine produced by the generic drug companies Appco Pharma and Northwind Pharmaceuticals. The recall was announced because the drug may contain unsafe levels of N-Nitrosodimethylamine (NDMA), a carcinogen. The FDA had announced in September that it discovered the drug contained NDMA but did not advise consumers to discontinue use of the drug. Ranitidine – commonly known as Zantac – is prescribed to 15 million Americans and is taken by millions more in over-the-counter versions. The drug was recently removed from the shelves of several retailers as a precaution. Zantac was once the best-selling drug in the world. 

The discovery of NDMA in ranitidine occurred when the mail-order pharmacy company Valisure tested a ranitidine syrup. When the syrup tested positive for NDMA, Valisure tested other products containing ranitidine and found similarly high amounts of the carcinogen. Its findings were then reported to the FDA. According to the CEO of Valisure, the presence of NDMA in ranitidine could be due to chemical stability issues with the drug.

The FDA did not recall the drug at that time, citing the extreme conditions of the tests, and claimed that less extreme conditions resulted in much smaller amounts of NDMA. Valisure also claimed that NDMA was found in high amounts in tests meant to simulate gastric fluid. However, when the FDA performed a similar test, it found no formation of NDMA. This may be due to the lack of sodium nitrate in the FDA’s tests. The FDA acknowledged this issue by warning consumers who wish to continue taking ranitidine to avoid foods containing high amounts of sodium nitrate, such as processed meats. The FDA has also noted that the levels of NDMA found in ranitidine were comparable to what might be found in smoked or grilled meats.

Several lawsuits have been filed asserting that Zantac has caused cases of cancer. However, experts point out that the likelihood of any individual getting cancer from taking the heartburn medicine is low. 

(Michele Cohen Marill, WIRED)

EPA Aims to Reduce Truck Pollution, and Avert Tougher State Controls

The Trump administration has announced a proposed rule change to tighten limits on pollution from trucks. Initiated by EPA head Andrew Wheeler, the new rule would limit emissions of nitrogen dioxide, which has been linked to asthma and lung disease. The change is predicted to curb nitrogen dioxide pollution more than current regulations do, but will likely fall short of what is necessary to significantly reduce respiratory illness.

The administration seems to be following the lead of the trucking industry, which lobbied for a new national regulation that would override states’ ability to implement their own rules, especially California’s. The EPA’s current rule on nitrogen dioxide pollution from heavy-duty highway trucks, enacted in 2001, required trucks to cut emissions by 95 percent over 10 years. This resulted in a 40-percent drop in nitrogen dioxide emissions across the nation. Although no law requires the EPA rule to be updated, the Obama administration’s EPA had examined further cuts. The cuts were petitioned for by public health organizations and aimed to reduce emissions by another 90 percent by about 2025. California had begun the legal process to make such cuts a reality, but the Trump administration revoked California’s legal authority to set tighter standards on tailpipe emissions.

This revocation has led the EPA to move forward with a new rule that would reduce emissions by only 25 to 50 percent. The trucking industry has pointed out that the current administration has gone to great lengths to understand how EPA regulations affect it, something that was not standard practice under previous administrations. However, representatives of the American Lung Association have lamented that the current administration is not taking as much advice from major health and environmental groups as previous administrations did.

(Coral Davenport, New York Times)

Written by sciencepolicyforall

January 14, 2020 at 10:30 am

Bias in Artificial Intelligence


By: Thomas Dannenhoffer-Lafage, PhD

Image by Geralt from Pixabay

Artificial intelligence (AI) is a nebulous term whose definition has changed over time, but a useful modern definition is “the theory and development of computer systems that are able to perform tasks normally requiring human intelligence.” Modern AI has been applied to a wide variety of problems, including streamlined drug discovery, image processing, targeted advertisement, medical diagnosis assistance, hedge fund investment and robotics, and it influences people’s lives more than ever before. These powerful AI systems have allowed certain tasks to be performed more quickly than a human could, but AI also suffers from a very human deficiency: bias. While bias has a more general meaning in the field of statistics (a field closely related to AI), this article will specifically consider social bias, which exists when a certain group of people is given preference or disfavored.

To understand how AI becomes socially biased, it is critical to understand how certain AI systems are made. The recent explosion of AI breakthroughs is due in part to the increased capabilities of machine-learning algorithms, specifically deep-learning algorithms. Machine-learning algorithms differ from human-designed algorithms in a fundamental way. In a human-designed algorithm, a person must provide specific instructions to the computer so that a set of inputs can be changed into outputs. In a machine-learning algorithm, a person provides a set of data to the algorithm, and the algorithm learns how to perform a specific task from the patterns within that data. This is the “learning” in machine-learning. There are many different types of tasks that a machine-learning algorithm can perform, but all ultimately rely on some data set to learn from. Deep-learning algorithms have become so powerful in part because of the large amounts of data available for training, the availability of more powerful and inexpensive GPU cards that greatly improve the speed of AI algorithms, and the increased availability of open-source deep-learning code.
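
To make that difference concrete, the snippet below is a minimal sketch of the workflow just described, using entirely synthetic data and scikit-learn’s LogisticRegression as an illustrative (not prescriptive) choice of model: no one writes the decision rule by hand; the algorithm infers it from labeled examples.

```python
# A minimal sketch of "learning from data": no human writes the decision rule;
# the algorithm infers it from labeled examples. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: two numeric features and a binary label that depends on them.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# "Training" means fitting the model's parameters to the patterns in the data.
model = LogisticRegression().fit(X, y)

print("learned coefficients:", model.coef_)
print("training accuracy:", model.score(X, y))
```

Whatever patterns the training data contain, including any social bias, are exactly what the fitted model encodes.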

An AI can be trained to be biased maliciously, but more concerning is that bias can be incorporated into AI unintentionally. Specifically, inadvertent bias can creep into machine-learning based AI in two ways: when bias is inherent to the data and when bias is embedded in the task the AI is asked to perform. Inherent bias can occur when the data are not representative of reality, such as when certain populations are inadequately represented. For example, a facial recognition program trained with more photos of light-skinned faces than dark-skinned faces was less effective at identifying dark-skinned faces, leading to errors and misidentifications. Bias can also be inadvertently introduced during featurization, a process in which sophisticated data is manually modified and proofed before it is presented to an AI to improve its learning rate and task performance; human agents may unknowingly introduce bias into a dataset while doing so. Finally, bias often exists in the task that an algorithm is asked to perform. Tasks given to AI algorithms are usually chosen for business reasons, and questions of fairness are therefore typically not considered.
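
The representation problem can be demonstrated with a toy experiment. The sketch below is purely illustrative (synthetic data, made-up groups, and an assumed scikit-learn model): when one group supplies far fewer training examples, the learned model fits the majority group’s pattern and performs noticeably worse on the underrepresented group, much like the facial recognition example above.

```python
# Synthetic demo of "inherent bias": the model sees far fewer examples of
# group B during training and, as a result, is much less accurate on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, shift):
    """Two-feature toy data; the true decision rule differs slightly by group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training data: group A is heavily overrepresented relative to group B.
Xa, ya = make_group(5000, shift=1.0)   # group A
Xb, yb = make_group(250, shift=-1.0)   # group B (underrepresented)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group.
Xa_test, ya_test = make_group(2000, shift=1.0)
Xb_test, yb_test = make_group(2000, shift=-1.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

Running this typically yields near-perfect accuracy for group A and close-to-chance accuracy for group B, even though group membership is never given to the model.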

When a machine-learning based AI is trained on a biased dataset, the consequences can be serious. For instance, the recently introduced Apple credit card used AI to determine the creditworthiness of applicants. Issues were raised about the validity of this system when Steve Wozniak pointed out that his wife received a credit limit ten times lower than his, despite their credit profiles being nearly identical. There have also been issues of bias in AI systems used in school admissions processes and hiring platforms. For example, an AI algorithm tasked with reviewing application materials for an open job position was found to be unknowingly biased against women applicants because of differences in language between male and female applicants. Bias was also an issue in an AI algorithm used to estimate the likelihood of recidivism among parolees, which rated African Americans as higher risk than their actual rates of reoffending warranted. Since AI has been trusted with greater decision-making power than in the past, it has the power to propagate bias at a much greater rate than ever before.

Even though bias has been a known issue in AI for many years, it is still difficult to fix. One major reason is that machine-learning algorithms are designed to take advantage of patterns, or correlations, in data that may be impossible for a human to see. This can create problems because an AI may treat artifacts of certain data as an indication that a certain decision should be made. For instance, certain medical imaging equipment may have slightly different image quality or handle boundary conditions differently, and an AI algorithm may use those quirks to determine a diagnosis. Another issue is that, even if an AI algorithm is not directly given sensitive attributes (e.g., race or age), it may be able to infer them. For instance, if height is included in a training data set, the algorithm may be able to infer whether an applicant is male, since men are, on average, taller than women. This problem of invisible correlations is compounded by the fact that AI systems are generally unable to explain their decisions, and backtracking how a decision was made can be impossible. Finally, AI systems are designed to perform tasks as successfully as possible and are not designed to take fairness into account at the design stage.
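
The proxy issue in particular is easy to reproduce. The following sketch is a hypothetical, entirely synthetic illustration (the height distributions are rough assumptions, not real measurements): a model that is never shown the protected attribute can still recover it from a correlated feature.

```python
# Synthetic illustration: a model never sees the protected attribute "sex",
# but a correlated feature (height) lets the attribute be inferred anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
sex = rng.integers(0, 2, size=n)                  # 0 = female, 1 = male (synthetic)
height_cm = np.where(sex == 1,
                     rng.normal(178, 7, size=n),  # assumed male height distribution
                     rng.normal(165, 7, size=n))  # assumed female height distribution

# Train a model to predict the "hidden" attribute from the proxy alone.
X = height_cm.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, sex, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy recovering sex from height alone:", clf.score(X_test, y_test))
```

In this toy setup the classifier typically recovers the withheld attribute from height alone with roughly 80% accuracy, which is why simply deleting sensitive columns from a dataset does not guarantee a fair model.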

Thankfully, different solutions to bias in AI algorithms have been proposed. One possibility is to include fairness metrics as part of the design process of AI. An example is counterfactual fairness: an algorithm satisfies counterfactual fairness when its prediction for an individual is the same as it would be in a counterfactual case in which all factors are identical except for the individual’s group membership. However, it has been shown that certain fairness metrics cannot all be satisfied simultaneously because each metric constrains the decision space too greatly. Another solution is to test AI algorithms before deployment, checking that the rates of false positives and false negatives are equal across protected groups. New technologies may also help fight AI bias in the future. One is human-in-the-loop decision making, which enables a human agent to review the decisions of an AI system and catch false positives; another is explainable AI, which is able to explain its decisions to a human. Other solutions include having groups that develop AI engage in fact-based conversations about bias more generally, including trainings that identify types of biases, their causes, and their solutions. A push for more diversity in the field of AI is also a necessary step, because team members who are part of majority groups can disregard differing experiences. Lastly, it is suggested that AI algorithms be regularly audited, both externally and internally, to ensure fairness.
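
As a sketch of what the pre-deployment check described above might look like, the short audit helper below (hypothetical code, with placeholder predictions, labels, and group identifiers) computes false positive and false negative rates separately for each protected group so that large gaps can be flagged before a model is deployed.

```python
# Hypothetical pre-deployment audit: compare false positive and false negative
# rates across protected groups. The arrays below are illustrative placeholders.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        fp = np.sum((yp == 1) & (yt == 0))
        fn = np.sum((yp == 0) & (yt == 1))
        fpr = fp / max(np.sum(yt == 0), 1)  # guard against empty classes
        fnr = fn / max(np.sum(yt == 1), 1)
        rates[g] = (fpr, fnr)
    return rates

# Illustrative inputs: true outcomes, model predictions, and group membership.
y_true = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(group_error_rates(y_true, y_pred, groups))
# Large gaps between groups' rates would flag the model for further review.
```

A counterfactual-fairness check works differently: it re-runs the model on an individual’s record with only the group membership altered and asks whether the prediction changes.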

The discussion of AI bias within the field has come a long way in the last few years. Companies involved in AI development now employ people whose main role is to fight AI bias. However, most of the regulation of biased AI still occurs internally. This has prompted actors outside of AI development, including the government and lawyers, to look at AI bias issues as well. Recently, lawmakers introduced the Artificial Intelligence Initiative Act, which aims to establish means for the responsible delivery of AI. The bill calls for NIST to create standards for evaluating AI, including the quality of training sets. The NSF would be called on to create training programs for responsible AI use that address algorithm accountability and data bias. The bill does not propose guidelines or timelines for governmental regulations; rather, it creates organizations to advise lawmakers and perform governmental research. Such regulations would be imperative if AI were to move away from self-regulation. The decision-making power of AI has also gotten the attention of lawyers in the field of labor and employment, who fear that today’s AI systems “have the ability to make legally significant decisions on their own.” Thus, there is more opportunity than ever for technologists to shape AI at the policy level: by being involved in the governmental organizations creating policy, educating lawmakers working on AI law and regulations, alerting the public to bias and other issues, and working on industry-specific solutions directly.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

January 10, 2020 at 1:58 pm