Science Policy For All

Because science policy affects everyone.


Breast cancer screening: How do we maximize benefit while minimizing harm?


By: Catherine Lerro, Ph.D., M.P.H.

Image by Bruno Glätsch from Pixabay 

Breast cancer is the most commonly diagnosed tumor in US women and the second leading cause of cancer death in women, with an estimated 268,600 diagnoses and 41,760 deaths predicted for 2019. Despite these sobering numbers, mortality due to breast cancer has declined over the past several decades. Today, women diagnosed with early-stage disease are about 99% as likely to be alive five years after their diagnosis as cancer-free women. These declines in cancer death have been largely attributed to both improvements in treatment and the successful implementation of mammography (breast x-ray) screening programs, considered a hallmark of preventative cancer care. Some researchers estimate that upward of 380,000 breast cancer deaths have been averted since 1989 due to mammography and improved breast cancer treatment. Reflecting this, the Affordable Care Act has provisions to ensure that women with private health insurance, public health insurance (e.g. Medicare, Medicaid), or health insurance purchased through a state exchange are covered for breast cancer screening.

The idea behind mammographic screening is that breast cancers diagnosed at an early stage are more likely to respond well to treatment, preventing cancer-related death. While not all cancers have population-wide screening programs, breast cancer is a good candidate for screening. First, breast cancer is common enough to warrant subjecting women to a mammogram at regular intervals during a defined period of known risk. If a disease is very rare, it is likely not a good candidate for population-wide screening because the costs would outweigh the potential benefit. Second, there must be a good test available that is both sensitive and specific. In other words, the test should detect as many true cases as possible while minimizing the number of patients with false positives, who require more invasive testing such as a biopsy. Finally, there must be some benefit to detecting disease early. For breast cancer, women with early-stage disease may be more easily treated and have a better prognosis compared to women with distant-stage disease.
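To see why both rarity and test accuracy matter so much, consider a rough back-of-the-envelope calculation of the positive predictive value of a screening test. The prevalence, sensitivity, and specificity below are illustrative assumptions chosen for the sketch, not figures from any screening trial:

```python
# Illustrative positive-predictive-value calculation for a screening test.
# All numbers below are assumptions for illustration only.
prevalence = 0.005     # 0.5% of screened women assumed to have breast cancer
sensitivity = 0.87     # fraction of true cases the test detects
specificity = 0.89     # fraction of cancer-free women who test negative

population = 10_000
cases = population * prevalence
true_positives = cases * sensitivity
false_positives = (population - cases) * (1 - specificity)

# Probability that a positive test reflects a real cancer
ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"PPV: {ppv:.1%}")
```

Under these assumptions, a positive result is far more likely to be a false alarm than a real cancer, which is why very rare diseases make poor candidates for population-wide screening even when the test itself is reasonably accurate.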

Currently, mammograms are recommended for much of the adult female population in the US over the age of 50. Many different organizations release breast cancer screening guidelines on a regular basis, including (but not limited to) the US Preventive Services Task Force, the American Cancer Society, the American College of Obstetricians and Gynecologists, and the American College of Radiology. While the recommendations share some similarities, there are important differences, and no one guideline is universally accepted. For example, for women ages 50-74, the US Preventive Services Task Force recommends biennial mammograms, while the American College of Radiology recommends yearly mammograms. These differences may arise from the data used to develop the guidelines and how the data are valued. For example, the US Preventive Services Task Force counts mortality reduction as the sole benefit of mammography and considers potential risks such as false-positive tests. The American College of Radiology considers other mammography benefits beyond mortality reduction, such as less aggressive treatment for early-stage cancers. The American College of Radiology has also recently amended its guidelines to consider race, with the option to screen African American women, who are at greater risk of more aggressive breast cancers, starting at younger ages at the discretion of both the patient and physician. 

Understanding how and if breast cancer screening guidelines are integrated into clinical practice is a murkier area still. In recent years, most major guidelines have recommended less routine screening and have endorsed a more individualized approach that involves discussion of the benefits and harms of screening and incorporates patient preferences and beliefs, especially for younger women. However, studies have found that despite these changes in recommendations, breast cancer screening practice in the US has changed very little. This may be driven by US health system traits, such as fee-for-service payment systems and concerns about litigation. Furthermore, both clinicians and patients may overestimate the benefits and underestimate the harms of mammography, particularly for younger women.

The benefits of diagnosing breast cancer early cannot be overstated, as response to treatment and survival depend greatly on stage at diagnosis. However, the potential harms of screening are often overlooked. Of course, there are economic costs incurred for any wide-scale screening program. Just as importantly, we should seriously consider the physical and emotional costs of overdiagnosis and overtreatment. A 2018 report in the Journal of the American Medical Association found that among every 10,000 women screened for breast cancer, more than half of those under the age of 60 will experience a false-positive test result. Almost 10% of women will undergo at least one unnecessary biopsy. Additionally, the authors demonstrated that through screening more women were potentially overdiagnosed (cancers diagnosed and treated that would never have become clinically evident) than deaths were averted. There may be psychological consequences to false-positive test results, including both short-term and long-term anxiety. Unnecessary biopsy and overdiagnosis could potentially have long-term physical health consequences that would otherwise be avoided. 
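Part of why so many screened women eventually see a false positive is that the per-screen risk compounds over a screening lifetime. A minimal sketch of that compounding, assuming a hypothetical 10% false-positive probability per mammogram and independence between rounds (both simplifying assumptions, not figures from the JAMA report):

```python
# Chance of at least one false positive across repeated screening rounds,
# assuming a fixed per-screen false-positive rate and independent rounds.
# The 10% per-screen rate is a hypothetical assumption for illustration.
per_screen_fp = 0.10

for n_screens in (1, 5, 10, 20):
    p_any_fp = 1 - (1 - per_screen_fp) ** n_screens
    print(f"{n_screens:2d} screens -> P(at least one false positive) = {p_any_fp:.1%}")
```

Under these assumptions, ten rounds of screening already push the chance of at least one false positive past 60%, which makes it easy to see how a majority of regularly screened women could experience one.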

How do we improve mammography screening in the US, maximizing the benefits while minimizing the risks? What is clear is that there is no simple solution. In a health system that largely favors more testing at potential cost to patients, institutional changes in how health insurance reimburses clinicians for care should consider looking beyond fee-for-service models. The newest breast cancer screening guidelines also favor individualized approaches, prioritizing screening among high-risk women and educating patients about the potential benefits and harms of screening with full consideration of their own medical history and preferences. Clinicians may consider tools that utilize detailed patient information to assess an individual patient’s risk of breast cancer, as well as tools soliciting patient preferences that support shared decision-making. Finally, it is important that all women requiring regular mammograms have access to breast cancer screening and high-quality treatment regardless of age, race, geographic location, or socioeconomic status, in order to minimize disparities in stage at diagnosis and breast cancer survival. 

Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

May 15, 2019 at 11:30 am

Posted in Essays


Living in America with a chronic disease: Drug prices here and why they are so high.


By: Mohor Sengupta Ph.D.

Image by Liz Masoner from Pixabay 

The USA has the highest average drug prices of all developed nations across the globe. The average expenditure on drugs per person is around $1200 per year in the U.S., while it is roughly $750 in Canada, according to a 2014 survey. Let us look at a specific example. Nexium is a drug that helps reduce stomach acidity. It is manufactured by AstraZeneca in Sweden and sold to customers in the U.S., Canada, U.K., Australia, New Zealand, India and Turkey. The 40 mg pill costs $3.37 in Canada, $2.21 in the U.K., Australia and New Zealand, less than 37 cents in India and Turkey, and $7.78 in the U.S. Specialty medicines, like those used for cancer, can cost $10,000 a month in the U.S.

Fred Smith, whom I interviewed recently, is a 26-year-old freelance musician and trumpet instructor. Shortly after his 26th birthday, his health insurance coverage under his mother’s provider plan ended. He went on to buy his medical insurance from the private provider Blue Cross Blue Shield, only to realize that he had to pay nine times the cost for each of two medicines, Vyvanse and Viibryd, and 18 times the cost for a third medicine, Adderall, compared to the amount paid while on his mother’s insurance. 

So why do Americans pay more for their medicines? 

  • Drug manufacturers in the U.S. can set the price of their products. 

While this is not the norm elsewhere in the world, federal law in the U.S. does not allow the FDA or public insurance providers to negotiate drug prices with manufacturers. Medicare Part D, established by 2003 legislation, prevents the nation’s largest single-payer health system from negotiating drug prices. Medicaid, which is the public healthcare program for people with limited income and resources, must cover all FDA-approved drugs, irrespective of the cost. However, drug makers must provide rebates to the government for drugs billed to Medicaid. In general, the biggest cost of medicines is borne by Medicare and private insurers. Private insurance providers do not usually negotiate prices with drug manufacturers. This is because middlemen, or third-party pharmacy benefit managers that administer prescription drugs, such as CVS Health, receive payments from drug companies to shift market share in favor of these insurers. These deals also leave consumers with limited choice. 

Drug makers in the U.S. not only set their own prices but are also authorized to raise them. Martin Shkreli became the “most hated man in America” overnight when he raised the price of the generic anti-parasitic drug Daraprim from $13.50 a pill to $750 a pill, an increase of more than 5,000%. Mr. Shkreli explained to critics that the hike was warranted because Daraprim is a highly specialized medicine, likening it to an Aston Martin previously sold at the price of a bicycle. He added that the profits from the price increase would go into improving the 62-year-old recipe of the drug. 

Deflazacort, a steroid used to treat Duchenne muscular dystrophy, is a generic compound that has been available worldwide for decades and costs $1000-$2000 per year. Yet, Illinois-based Marathon Pharmaceuticals acquired FDA approval to sell deflazacort under the brand-name Emflaza at $89,000 per year. 

Speaking of generic drugs, here is the next big reason for unaffordable brand-name medicines. 

  • Government-protected monopolies for certain drugs prevent cheaper generics from entering the market. 

The U.S. has a patent system that allows brand-name drug makers to retain exclusive selling rights for 20 years or more. Makers of drugs for rare diseases can also enjoy extended monopolies on sales. Moreover, these manufacturers can prolong their solo market dominance by making minor, non-therapeutic modifications to the patented product, like changing the dye component in the coating. They also often pay generic manufacturers to delay their products from entering the market. 

Additionally, FDA approval of generics following expiration of brand-name drug patents can be a long process; it can take up to 3-4 years for generic drug manufacturers to get FDA approval. It is estimated that prices of generic medicines fall to 55% of the brand-name medicine price once two generics enter the market and 33% of the brand-name cost when five generics become available. 
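Those erosion estimates are easy to translate into dollar terms. A minimal sketch using the 55% and 33% figures above; the $500 monthly brand-name price is a hypothetical example, not a figure from any of the drugs discussed here:

```python
# Price erosion as generic competitors enter the market.
# The 55%/33% fractions come from the estimates cited above;
# the $500 brand-name price is a hypothetical illustration.
brand_price = 500.00                      # assumed monthly brand-name cost
erosion = {0: 1.00, 2: 0.55, 5: 0.33}     # generics on market -> price fraction

for n_generics, fraction in sorted(erosion.items()):
    price = brand_price * fraction
    print(f"{n_generics} generic competitors: ${price:,.2f}/month")
```

Even under these toy numbers, the arrival of a handful of generics cuts a patient's monthly cost by roughly two-thirds, which is exactly what patent extensions and pay-for-delay deals postpone.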

However, why would a brand-name manufacturer set such an unaffordable price to begin with?

  • Drug makers cite unjustified costs of research and development. 

Critics generally agree that drug makers put an unjust price on their products by citing the research that went into producing them. Because most of the underlying R&D is funded by the National Institutes of Health via federal grants or by venture capital, the cost of research cited by the drug makers is grossly exaggerated. In reality, companies spend no more than 10-20 percent of their revenue on research. 

Sofosbuvir was made by Michael Sofia, a scientist with a Princeton-based pharmaceutical company called Pharmasset; he received the 2016 Lasker-DeBakey Clinical Medical Research Award for inventing it. Sofosbuvir is recommended for management of hepatitis C. After Gilead Sciences acquired Pharmasset for $11 billion in 2011, it applied to the FDA for approval of a new drug combining sofosbuvir and ribavirin, the latter first made in 1972 by scientists at International Chemical and Nuclear Corporation (currently Canada-based Bausch Health Companies). Gilead priced their product at $84,000 for a single course of treatment in the U.S. The pricing caused a huge controversy when patients on Medicaid were denied the drug until they became seriously ill. Moreover, generic licensing agreements to produce sofosbuvir in 91 developing countries, which bear more than half of the world’s hepatitis C burden, came under fire when Gilead asked for prices unaffordable for consumers in those countries. 

This brings us to the final cause of high drug prices. 

  • Doctors often prescribe brand-name combination drugs when cheaper generic equivalents exist. 

Doctors are often unaware that their prescriptions could be cheaper for their patients if they purchased two generic medicines instead of the brand-name prescription drug that is just a combination of the two. Vimovo, manufactured by Horizon Pharma, is a drug used to treat symptoms of osteoarthritis, rheumatoid arthritis, and ankylosing spondylitis. It is a combination of two generic medicines, naproxen (brand-name Aleve) and esomeprazole (brand-name Nexium). Naproxen is the anti-inflammatory component (NSAID) and esomeprazole is the aforementioned stomach-acidity reducer, added to the combination to reduce side effects of the NSAID. Whereas a month’s supply of Aleve and Nexium cost one patient $40, his insurance company was billed $3252 for the same supply of Vimovo. Moreover, not everyone who uses NSAIDs experiences stomach problems, and those who do not have no need for the additional esomeprazole component. 

Many Americans do not fill their prescriptions because they cannot afford to. Data show that 36 million Americans between the ages of 18 and 65 did not fill their prescriptions in 2016. Many resort to buying medicines online from foreign sellers or having them imported. Both routes are illegal, and therefore we do not know the exact percentage of the population participating in these practices. 

I interviewed Tammy Connor, who regularly gets her medications from abroad. Tammy takes Synthroid, a brand-name drug used to manage symptoms of hypothyroidism. She had been procuring it from Canada at one-third its U.S. price for many years. In the middle of 2018, the U.S. began blocking drug purchases from Canada, preventing her from continuing this cost-saving practice. Eventually, she got a referral to a U.K.-based drug company called Medix Pharmacy, where she pays one-third the amount that she would have to pay if she purchased Synthroid in the U.S. “Ironically, Medix gets its Synthroid supply from Canada,” Tammy said.

“Big Pharma” is a major lobbying group in the U.S. It comprises a few gigantic pharmaceutical companies that have together kept their profit margins rising amid public outcry over drug unaffordability. Big Pharma also includes corporations that push overpriced drugs to customers. With their deep pockets, they can spend astronomical amounts on advertising and lobbying. 

Unaffordable prices of life-saving medicines cause many people to skip necessary medications, thanks in large part to Big Pharma. Now, more than ever, it is time something was done about this. 



Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 9, 2019 at 4:23 pm

Keep your head in the game… or don’t: The link between football and brain injury


By: Saroj Regmi, Ph.D.

Image by WikiImages from Pixabay 

American football is a team sport that enjoys wide popularity and an extensive fan following. For over 30 years it has reigned as the most popular sport in the US. In recent years, however, it has remained at the forefront of controversy due to growing concern over long term health effects.

The safety of football was initially brought into question by a study in 2002 from Dr. Bennet Omalu, a neuropathologist working in Pittsburgh. Dr. Omalu, whose efforts were portrayed in the popular movie Concussion starring Will Smith, discovered the link between chronic traumatic encephalopathy (CTE) and American football players. He performed an autopsy of former Pittsburgh Steelers player Mike Webster and established that CTE, a disease previously ascribed to boxers, also occurred in football players. Forensic analysis of the brain of Mike Webster, who struggled for years with mood disorders, depression, and suicidal thoughts – symptoms associated with CTE – showed large accumulations of tau protein. Although the pathogenesis of CTE remains poorly understood, it is believed that clumping of the protein tau, also seen in Alzheimer’s patients, leads to death of brain cells. A series of publications have followed since 2002, including post-mortem analyses of the brains of former NFL players such as Terry Long, Justin Strzelczyk, Andre Waters and Tom McHale. From all these studies, the message is loud and clear: there is a strong link between tackle football and CTE.

With each new scientific report, the relationship between CTE and contact football has become clearer. A recent study in the Journal of the American Medical Association involving the brains of deceased people who played football at various levels, from high school to the NFL, identified CTE in 87% of the players. Even more remarkably, it identified CTE in 99% of NFL players – a shocking number. In the study, the authors also argue that disease risk and severity might be a result of age at first exposure to football and the duration of play, as well as various other factors. This suggests that even limited exposure to contact football can significantly increase the chances of suffering from CTE. 

CTE, also referred to as “punch drunk syndrome”, is as of yet not treatable, and research on the disease has been limited. Investigation of CTE pathogenesis is further complicated by the fact that a definitive diagnosis is only possible post mortem. Given the widespread impact of the disease, researchers have recently pushed to identify biomarkers of CTE in living patients. A recent collaborative study between the Concussion Neuroimaging Consortium and Orlando Health tested blood-based biomarkers and identified elevated levels of microRNAs in the blood of college football players. The report, published in the Journal of Neurotrauma, demonstrated that these biomarkers were elevated even prior to any head injury for the season, meaning that head injuries have a lasting effect and these biomarkers can identify head injuries incurred in previous seasons. Cognitive tests involving study participants demonstrated that players who struggled with memory and balance had much higher levels of microRNAs than those who did not. The researchers hope to use these microRNA biomarkers to identify at-risk athletes.

A recent report, published in The New England Journal of Medicine, has been a game changer in our understanding of CTE in NFL players. By taking brain scans of 26 former players with varying levels of symptoms associated with CTE, the study took an unbiased approach to analyzing the severity of CTE in professional NFL players. The study used positron emission tomography (PET) scans to determine that NFL players had higher levels of abnormal tau protein in disease-associated parts of the brain in comparison to men of similar age who had not played football. In contrast to some previous studies, the results did not reveal a correlation between the severity of tau accumulation and the degree of cognitive issues associated with CTE. A correlation between tau accumulation and total years of playing football was seen, however. Therefore, while tau deposition can serve as a biomarker of CTE, the level of tau accumulation does not determine the severity of the disease. Interestingly, the study also found one former player who had levels of amyloid-beta deposition comparable to that of an Alzheimer’s patient. While the study provided a lot of answers, it also raised a wealth of new questions. It is still unclear whether tau accumulation is faster in people with repeated head trauma. Also, how the accumulation of tau leads to the behavioral alterations associated with CTE remains a complete mystery. Although the report was careful to highlight that this imaging-based approach is still in its infancy and that it could take years to develop a proper diagnostic test for the disease, the results of the analysis are definitely encouraging. This is the first reported study to utilize tau imaging in living players. 

A major takeaway from these studies is that although CTE remains poorly characterized, with symptoms ranging from forgetfulness to suicidal thoughts, it is almost invariably caused by concussions and head injuries resulting from contact football. What is terrifying is that CTE can occur not only in professional football players but also in high school students who play football. These reports bring into question the safety of the sport in its current state. With over a million high school students engaged in the sport, a radical rethinking of the game is required to make it a safe and fun activity that youngsters can partake in without risking their health. 

Recently, the Canadian league instituted a ban on full-contact practices to reduce collisions during practice. The league has also increased time between games so that players are afforded a longer recovery time. Similar approaches have been taken by the Ivy League. There is also a need for policies to ensure that the general public is aware of the risks, particularly children and their parents. HEADS UP is one such program, initiated by the CDC, that provides online training courses for health care providers and high school sports coaches. Efforts have also been made in recent years, at both the state and federal levels, to reduce concussions in youth. Although not monumental, these efforts are an important step in the right direction.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 1, 2019 at 4:39 pm

Posted in Essays


Recent trends and emerging alternatives for combating antibiotic resistance


By: Soumya Ranganathan, M.S.

Image by Arek Socha from Pixabay 

Antibiotic resistance is an ongoing and rising global threat. While bacteria and other microbes tend to develop resistance to antibiotics and antimicrobials slowly over time, the overuse and abuse of antibiotics has accelerated this effect and led to the current crisis. The new Global Antimicrobial Surveillance System (GLASS), developed by the World Health Organization (WHO), reveals antibiotic resistance in 500,000 people with suspected infections across 22 countries. A study supported by the UK government and the Wellcome Trust estimates that antimicrobial resistance (AMR) could lead to an annual death toll of about 10 million by 2050. It is also predicted to have a huge economic impact and could cost 100 trillion USD between 2017 and 2050.

Factors underlying the non-targeted use of antibiotics

Prescribing the right antibiotic for an infection takes about a week due to the process of identifying the infectious agent. To avoid the spread of infection, physicians are forced to make a presumptive diagnosis prior to agent identification and typically prescribe a broad-spectrum antibiotic. Since broad-spectrum antibiotics act against a wide range of bacterial strains, their rampant use has led to the emergence of bacterial strains that are resistant to even the most potent antibiotics available. This trend has made it difficult to treat previously curable hospital-acquired infections and other benign infections. Not only is the discovery of new antibiotics complicated (only one new class of antibiotics has been developed in the past three decades), but the development of an antibiotic, from discovery to medicine, also generally takes about one to two decades. Here we will explore certain alternative strategies scientists all around the world are pursuing in their fight against antibiotic resistance. 

Antibiotic Susceptibility Test  

Reducing the time between a patient becoming ill and receiving treatment is critical for containing and effectively treating the infection. A key part of this effort entails making improvements to the antibiotic susceptibility testing (AST) system, which typically has two steps: (i) identifying the infectious agent and (ii) identifying the most effective antibiotic to treat the infection.

Conceptually, new and rapid AST systems have been proposed and developed thanks to advancements in phenotyping methods, digital imaging and genomic approaches. But a plethora of factors act as roadblocks for implementing rigorous and standardized AST systems worldwide. A recently published consensus statement explores the major roadblocks for the development and effective implementation of these technologies while also suggesting ways to move past this stalemate. The major points of the statement are summarized below. 

  • Regulation – Since different regions and countries have their own requirements for marketing and validating a diagnostic method, the onus is on developers to meet various demands. This also requires harmonization and cooperation among policy makers to formulate and agree on a standard set of rules.
  • Collection and dissemination of information regarding various strains and antibiotics – Antibiograms, summaries of the susceptibility rates of selected pathogens to a variety of antimicrobial drugs, provide comprehensive information about local antibiotic resistance. The challenge here lies in making the data available in real time and in developing a “smart antibiogram”. This is necessary to perform quicker analysis of samples and to reduce the time to treatment, which eventually translates into more lives saved. 
  • Cost involved in developing new, sensitive, and faster diagnostics – Though current diagnostics are cheap, they are slow in identifying pathogenic bacteria. The transition to more advanced and sensitive diagnostics has been slow since their development takes time and incurs more cost. However, this scenario is likely to change soon, as rising levels of antibiotic resistance are making existing diagnostics obsolete, effectuating more investment in this sector. 

Antivirulence therapy

Small molecules are gaining prominence as an alternative or as adjuvants to antibiotic treatments. Recently, researchers from Case Western University have developed two small molecules, F19 and F12, that show promise in the treatment of methicillin-resistant Staphylococcus aureus (MRSA) infection in mouse models. The small molecules bind to a Staph. aureus transcription factor called AgrA, deterring it from making toxic proteins and rendering the bacteria harmless. Treatment with F19 on its own resulted in a 100% survival rate in a murine MRSA bacteremia/sepsis model, while only 30% of untreated mice survived. This kind of antivirulence therapy allows the immune system to clear the pathogens (since the bacteria are essentially harmless) without increasing pressure to develop resistance. When used as an adjuvant with an antibiotic, F19 resulted in 10-fold fewer bacteria in the mouse bloodstream than treatment with the antibiotic alone. This kind of combination therapy can be used on immunocompromised patients. It has also been effective against other bacterial species such as Staph. epidermidis, Strep. pyogenes, and Strep. pneumoniae, and may serve as an arsenal against a broad variety of gram-positive bacterial infections. Overall, the small molecule approach could also bring many previously shelved antibiotics back into use, as it provides a means to improve their efficacy in treating bacterial infections. Another class of engineered proteins, called Centyrins, shows promise in treating Staph. aureus infection using a similar mechanism, as they bind to the bacterial toxins and prevent them from disrupting the immune system. 

Molecular Boosters

Stanford University chemists (in a study published in the Journal of the American Chemical Society) have developed a booster molecule called r8 which, when used in combination with vancomycin (a first-line antibiotic used for MRSA infections), helps the antibiotic penetrate the biofilm and remain there for a long time, enabling it to attack pathogens once they resurge from their dormant stage. This small molecule booster approach could be pursued further to provide existing antibiotics with additional abilities for besieging pathogens and arresting the spread of infections.


Photobleaching

A recent collaborative effort by scientists from Purdue University and Boston University has resulted in an innovative light-based approach called photobleaching (using light to alter the activity of molecules) to treat certain bacterial infections. Photobleaching of MRSA using low-level blue (460 nm) light has been found to lead to the breakdown of STX, an antioxidant (pigment) found in the membrane of the bacteria. STX protects the bacteria against neutrophils (a class of white blood cells involved in the body’s immune response), and prior attempts to eliminate STX using medication have been futile. Photolysis of STX leads to a transient increase in the permeability of the bacterial membrane, rendering the bacteria more susceptible to even mild antiseptics like hydrogen peroxide and other reactive oxygen species. Since pigmentation is a “hallmark of multiple pathogenic microbes”, this technology could be extended for use against other microbes to tackle resistance. In addition to advantages such as ease of use and development, photobleaching could also be used with minimal or no adverse side effects. 

Antisense Therapy

One of the consequences of the non-targeted use of antibiotics to treat infections has been the occurrence of C. difficile infection in the colon. This condition is due to the elimination of useful bacteria along with the harmful bacteria in the gut. To tackle this infection, Dr. Stewart’s team from the University of Arizona has developed an antisense therapy which acts by silencing genes responsible for the survival of the pathogenic bacteria while sparing other useful bacteria in the gut. This strategy involves using molecules with two components – an antisense oligonucleotide moiety that targets the genetic material in C. diff and a carrier compound to transport the genetic material into the bacterium. Though this treatment approach shows potential in providing a targeted, less toxic, nimble and cost-effective alternative against existing and evolving pathogens, clinical trials must be undertaken to see its effects in practice.

Future perspectives

In addition to the aforementioned strategies, the scientific community is pursuing immune modulation therapy, host-directed therapy, and probiotics to deal with the current AMR crisis. The problem with developing new antibiotics is that microbes will eventually develop resistance to them. Though time will reveal which approaches are truly effective in evading antibiotic resistance, the looming threat must be dealt with prudently. A holistic approach that restricts antibiotic use and channels it toward targeted therapy, while pursuing alternative therapies, must be adopted by clinicians, researchers, companies, global health experts, the public and policy makers to curb the resistance emergency.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 26, 2019 at 4:35 pm

Posted in Essays

The need for regulation of artificial intelligence

By: Jayasai Rajagopal, Ph.D.

Source: Wikimedia

The development and improvement of artificial intelligence (AI) portend change and revolution in many fields. A quick glance at the Wikipedia article on applications of artificial intelligence highlights the breadth of fields already affected by these developments: healthcare, marketing, finance, music, and many others. As these algorithms grow in complexity and in their ability to solve more diverse problems, defining the rules by which AI is developed becomes ever more important.

Before explaining the potential pitfalls of AI, a brief explanation of the technology is required. Attempting to define artificial intelligence raises the question of what is meant by intelligence in the first place. Poole, Mackworth, and Goebel clarify that to be considered intelligent, an agent must adapt to its surrounding circumstances, learn from changes in those circumstances, and apply that experience in pursuit of a particular goal. A machine that can adapt to changing parameters, adjust its programming, and continue to pursue a specified directive is an example of artificial intelligence. While such simulacra are found throughout science fiction, dating back to Mary Shelley's Frankenstein, they are a more recent phenomenon in the real world.

Development of AI technology has taken off within the last few decades as computer processing power has increased. Computers began successfully competing against humans in chess as early as 1997, with Deep Blue's victory over Garry Kasparov. In recent years, computers have started to earn victories in even more complex games such as Go and even video games such as Dota 2. Artificial intelligence programs have become commonplace at many companies, which use them to monitor their products and improve the performance of their services. A 2017 report found that one in five companies employed some form of AI in their workings. Such applications are only going to become more common in the future.

In the healthcare field, the prominence of AI is readily visible. A report by BGV predicted a total of $6.6 billion invested in AI within healthcare by 2021. Accenture found that this could lead to savings of up to $150 billion by 2026. With the recent push toward personalized and precision medicine, AI can greatly improve treatment and quality of care.

However, there are pitfalls associated with AI. At the forefront, AI poses a potential risk of abuse by bad actors. Companies and websites are frequently reported in the news for being hacked and losing customers' personal information. The 2017 WannaCry attack crippled the UK's healthcare system, as regular operations at many institutions were halted by compromised data infrastructure. While cyberdefenses will evolve with the use of AI, there is a legitimate fear that bad actors could just as easily employ AI in their attacks. Regulating the use and development of AI can limit the number of such actors with access to these technologies.

Another concern with AI is the privacy question raised by the amount of data required. Neural networks, which seek to imitate the neurological processing of the human brain, require large amounts of data to reliably generate their conclusions. Such data must be curated carefully to ensure that identifying information that could compromise the privacy of citizens is not easily divulged. Additionally, data mining and other AI algorithms could reveal information that individuals may not want disclosed. In 2012, a coupon-suggestion algorithm used by Target was able to discern the probability that some of its shoppers were pregnant. This proved problematic for one teenager, whose father wanted to know why Target was sending his daughter coupons for maternity clothes and baby cribs. As with the cyberwarfare concern, regulation is a critical component of protecting the privacy of citizens.

Finally, in some fields, including healthcare, there is an ever-present concern that artificial intelligence may replace some operations entirely. For example, in radiology there is a fear that improvements in image analysis and computer-aided diagnosis using neural networks could replace clinicians. For the healthcare field in particular, this raises several important ethical questions. What if the diagnosis of an algorithm disagrees with a clinician's? Since the knowledge an algorithm has is limited by the information it is exposed to, how will it react when a unique case is presented? From this perspective, regulation of AI is important not only to address practical concerns, but also to pre-emptively answer ethical questions.

While regulation as strict as Asimov's Three Laws may not be necessary, a more uniform set of rules governing AI is required. At the international level, there is much debate among the members of the United Nations about how to address cybersecurity. Other organizations, such as the European Union, have made more progress: a document recently released by the EU highlights ethical guidelines that may serve as the foundation for future regulations. At the domestic level, scientists and leaders in the field have pushed toward harnessing the development of artificial intelligence for the good of all. In particular, significant headway has been made in the regulation of self-driving cars. Laws passed in California restrict how the cars can be tested, and by 2014 four states already had legislation applying to these kinds of cars.

Moreover, the FDA recently released a statement describing its approach to the regulation of artificial intelligence in the context of medical devices. At the time of this writing, a discussion paper describing the FDA's proposed approach is open for comment. The agency notes that the conventional methods of acquiring pre-market clearance for devices may not apply to artificial intelligence; the newly proposed framework adapts existing practices to the context of software improvements.

Regulation must also be handled with care. Over-limitation of the use of and research in artificial intelligence could stifle development. Laws must be made with knowledge of the potential benefits that new technological advancements could bring. As noted by Gurkaynak, Yilmaz, and Haksever, lawmakers must strike a balance between preserving the interests of humanity and the benefits of technological improvement. Indeed, artificial intelligence poses many challenges for legal scholars.

In the end, artificial intelligence is an exciting technological development that can change the way we go about our daily business. With proper regulation, legislation, and research focus, this technology can be harnessed in a way that benefits the human experience while preserving development and the security of persons.

Written by sciencepolicyforall

April 18, 2019 at 2:25 pm

The worst humanitarian crisis in the world: war, disease outbreaks and famine in Yemen

By: Silvia Preite, Ph.D.

Source: Wikimedia

War and natural emergencies in low- and middle-income countries often weaken health systems and relax disease surveillance and prevention, increasing the risk of infectious disease outbreaks. The civil war in Yemen, now in its fifth year, continues today and, according to the United Nations (UN), has produced the worst ongoing humanitarian crisis in the world. Hunger and the spread of communicable diseases affect the vast majority of the Yemeni population.

Overview of the ongoing war in Yemen

Before the start of the conflict in 2015, Yemen was already the poorest country in the Middle East, with a debilitated health care system and poor infrastructure. In March 2015, the Houthi movement took over the government in Sana'a (the capital). In response, a coalition led by Saudi Arabia and the United Arab Emirates (supported by several other nations, including the United States, the United Kingdom, and France) started a military intervention in Yemen with the intention of restoring the Yemeni government. Overall, this conflict has devastated agriculture, services, and industry in Yemen. Moreover, in more than four years of air strikes, over 50% of Yemeni hospitals, clinics, water treatment plants, and sewage systems have been repeatedly bombed. The situation is further worsened by restrictions on food and medicines and by limited access to fuel, leaving many essential facilities, including water sanitation centers, non-functional. These conditions have led to extreme famine and the spread of diseases, including massive cholera outbreaks among the population.

Cholera outbreaks

Cholera is a bacterial disease causing severe diarrhea and dehydration, usually contracted through the consumption of contaminated water or food. Worldwide, an estimated 2.9 million cases and 95,000 deaths occur each year. Cholera is estimated to have affected more than 1 million people in Yemen, causing more than 2,000 deaths and becoming the worst cholera outbreak in the world. According to Médecins Sans Frontières (MSF, known in English as Doctors Without Borders) and Physicians for Human Rights, hospitals, mobile clinics, ambulances, and cholera treatment centers continue to be bombed, despite being marked as medical centers whose GPS coordinates have been communicated to the Saudi coalition. In addition to cholera, more than 3,000 cases of measles have been reported as a consequence of dropping immunization rates. Cholera and measles can be prevented by vaccination and proper health infrastructure. Global eradication efforts have been adopted over the years to eliminate these infections, making their spread in Yemen a significant setback.

Humanitarian violations

The Fourth Geneva Convention concerns the protection of civilians during conflicts and has been ratified by 196 states, including the parties involved in and supporting the war in Yemen. The air strikes on medical centers violate the principle of medical neutrality established by the convention, which protects hospitals and health care workers from attack. This international law also guarantees the free mobility of medical personnel within a conflict zone. In contrast, during the civil war in Yemen, all involved parties have restricted the activity of medical staff and the delivery of health care equipment, essential medicines, and vaccines.

Latest UN report on the Yemen crisis

According to the UN, an estimated 24.1 million people (80% of the total population) need assistance and protection in Yemen; of those, 14.3 million are in acute need (needing help to survive). More than 3 million people are currently internally displaced persons (IDPs), living in desperate conditions in Yemen or elsewhere in the region. An estimated 20.1 million people need food assistance, 19.7 million need basic health care services, and 17.8 million lack potable water, sanitation, and hygiene (WASH).


An estimated 7.4 million children are in need of humanitarian assistance. Severe violations of children's rights are taking place in Yemen, affecting more than 4,000 children and including the risk of armed recruitment for boys and child marriage for girls. An estimated 2 million children are deprived of an education, with around 2,000 schools made unusable by air strikes or occupied by IDPs or armed groups. Upwards of 85,000 children under the age of 5 may have died from severe hunger or other diseases. Overall, according to the UN, at least one child dies every ten minutes in Yemen from normally preventable diseases, hunger, and respiratory infections.

Urgent need for plans and resolutions

Both famine and disease outbreaks are threatening the Yemeni population, whose survival currently relies on international aid alone. In February 2019, the United Nations and the governments of Sweden and Switzerland convened in Geneva for the "High-Level Pledging Event for the Humanitarian Crisis in Yemen." The aim of this meeting was to request international support to alleviate the suffering of the Yemeni people; $4 billion was requested to provide life-saving assistance. So far, 6.3% of the requested budget has been funded; it is encouraging to note that last year the UN was able to raise almost 100% of what was initially requested through multiple worldwide donations.

Along with new funding, the UN Office for the Coordination of Humanitarian Affairs (OCHA) argues that urgent action is needed to prevent any exacerbation of the crisis. The most urgent step toward resolving this unprecedented, man-made medical and humanitarian emergency is for all parties involved to end the war and allow the re-establishment of food imports and adequate health services.

As the world barely watches, with only intermittent attention from the international media, the conflict and emergency remain. Non-profit and humanitarian organizations (UNICEF, MSF, WFP, Save the Children) have greatly aided the Yemeni population, despite challenging operational environments and import and circulation restrictions. Moreover, when millions of people, including children, are dying from hunger and preventable diseases, the ethical responsibility for this disaster becomes global and concerns all of us.

Global implications and future perspectives 

The ongoing conflict in Yemen illustrates how greatly support for research into innovative global-health solutions is needed. When the traditional healthcare system has collapsed and human rights are suspended, we need technologies that help the victims of war-torn countries achieve basic sanitary and health standards, alongside disease monitoring and vaccination strategies.

We live in an increasingly interconnected world where outbreaks of neglected or re-emerging infectious diseases know no boundaries. The consequences of conflicts and disasters in low- and middle-income countries therefore pose a significant global threat and may affect even stable healthcare systems. Proper evaluation of the causes and consequences of infection outbreaks during the Yemeni conflict is critical for two reasons: to devise new strategies to more effectively control and prevent their spread in war-torn areas, and to proactively encourage and support countries in regions of conflict to take the necessary measures to minimize the risk of similar humanitarian disasters in the future.

Written by sciencepolicyforall

April 11, 2019 at 4:29 pm

Taming the “Natural” – Regulating Dietary Supplements and Botanicals in the US

By: Katelyn Lavrich, Ph.D.

Image by Seksak Kerdkanno from Pixabay

Taking supplements is part of American culture: three out of four Americans use a dietary supplement. An estimated 50,000-80,000 unique supplements are on the market, including over 20,000 botanical supplements in the U.S. marketplace. Dietary supplements are defined as products intended for ingestion that do not represent a complete or conventional food source. They are marketed in many forms, including powders, capsules, gummies, and teas. Botanical and herbal supplements are a subset of dietary supplements containing whole plants, parts of plants, powdered plant material, or plant extracts. Dietary supplements are usually marketed as beneficial for health and are often recommended by physicians, especially as we age.

In February 2019, the FDA announced "one of the most significant modernizations of dietary supplement regulation and oversight in more than 25 years." Though plants and herbal remedies have been used therapeutically since the beginning of civilization, companies are now capitalizing on our obsession with health through 'natural' products. The dietary supplement industry is estimated at over $40 billion a year and rapidly growing, with some estimates that it could exceed $278 billion by 2024. Of that market, an estimated $7.5 billion was spent on botanical supplements in 2016.

As part of the FDA's announcement, 17 warning letters were sent to companies for illegally marketing products to prevent, treat, or cure Alzheimer's disease, diabetes, and cancer. This highlights a key distinction between dietary supplements and pharmaceutical drugs. Dietary supplements are substances used to add nutrients to the diet or lower the risk of health problems, but they have not been rigorously tested against specific diseases. Pharmaceutical drugs go through years of testing for safety and efficacy before ever reaching the market. Dietary supplements are regulated in the U.S. under the Dietary Supplement Health and Education Act of 1994 (DSHEA). This law establishes the FDA's role in regulating the safety of botanical supplements once a product is on the market. The FDA can only take action against a misbranded or adulterated supplement after the product is already on the market. The agency tracks side effects reported by consumers and supplement companies and can take legal action against a manufacturer or distributor if a product is found unsafe; it can also issue a warning or require that the product be removed from the marketplace. DSHEA places the liability on the company to have evidence of safety prior to introducing a supplement to the market. No organization must prove the efficacy of the product, and the FDA strictly prohibits disease claims. This leaves thousands of supplements on the market with little idea of their safety before people consume them.

Dietary supplement safety is not straightforward, especially when evaluating botanical supplements. Botanicals are complex mixtures of chemical constituents, and these mixtures can be highly variable depending on the plant background and how the plant is processed. In contrast to pharmaceutical drugs, which usually contain a single active chemical, botanical supplements may contain several active chemicals that can interfere with multiple biological functions at once and to different degrees. Manufacturing processes add a further degree of variability to safety testing of botanical supplements, as many processes are proprietary and often differ between companies.

Contaminants can be introduced into botanical products during the manufacturing process. Intentional adulteration of botanical products is a serious concern worldwide: there have been documented cases of botanicals being spiked with pharmaceutical additives to enhance the marketed effect. For example, stimulants, antidepressants, appetite suppressants, and laxatives have all been found in weight-loss supplements, and phosphodiesterase-5 inhibitors, such as Viagra, have been found in botanical supplements marketed for sexual performance. While intentional adulteration is a concern for the botanical industry, non-pharmaceutical contaminants introduced unintentionally during manufacturing are likely more common.

The FDA is taking steps to improve how it regulates dietary supplements. It is developing a tool to rapidly alert the public when products have been shown to contain unlawful or potentially dangerous ingredients; the tool would also alert manufacturers to avoid making or selling these ingredients. The FDA is also establishing guidelines requiring the submission of new dietary ingredient (NDI) notifications, giving the agency an opportunity to evaluate the safety of a new ingredient before it comes to market. While these are positive steps toward improving safety knowledge for all stakeholders regarding single ingredients, they fail to take into account the potential synergistic effects of the mixture of ingredients in dietary supplements.

Standardization is needed to measure and adjust the ratio of key components to help control batch-to-batch variability. Plant identification is complicated because many identifying features are lost during the manufacturing process. Several standards have been released, including the United States Pharmacopeia (USP), the European Pharmacopoeia, and the Pharmacopoeia of the People's Republic of China. While these resources set out guidelines for specifications and tests in Good Manufacturing Practice settings, they are not mandatory for dietary supplements in the U.S.

Because of the high variability between products, reproducibility in research studies using botanical supplements is a major issue. NIH's National Center for Complementary and Integrative Health released a natural product integrity policy in 2007 requiring that all NIH grants address the composition of research materials. Any clinical research evaluating the efficacy of a botanical product in "curing, treating, or mitigating a disease" shifts the research to the pharmaceutical approval track: researchers must submit an Investigational New Drug (IND) application, forcing exhaustive analysis of the product. However, the IND process makes the composition of botanicals proprietary. Better reporting and more transparency are needed in preclinical research using botanicals. The National Institute of Neurological Disorders and Stroke released a 2012 report recommending better reporting of sample sizes, randomization methods, and data analysis details. Of the studies reviewed, only 15% of clinical trials on popular botanicals (e.g. Echinacea, Ginkgo biloba, St. John's Wort) reported that they tested the content of the botanical, and only 4% provided enough information to compare the measured content to the expected content.

Manufacturers of botanicals are under numerous legal obligations to ensure their products are not adulterated. Some manufacturers pursue external certification from the U.S. Pharmacopeia (USP) or NSF International, though USP has verified only 139 products to date. There is a zero-tolerance policy for pesticide residues on all foods imported from outside the US, including botanicals: if any level of residue is detected, even an extremely low one, the shipment is rejected at US ports of entry, even if the botanical meets the highest quality standards and would be allowed on the European market. Manufacturers are urging the EPA to set tolerances for pesticides on imported botanicals, as the zero-tolerance standard is nearly impossible to meet, especially with new technology enabling extremely low limits of quantitation.

The rapid growth of the dietary supplement industry stems from increased demand for health and wellness products. Often touted as a "healthy" alternative to pharmaceutical drugs, dietary supplements are plagued by a lack of safety regulations and manufacturing standards. Because of their complexity, toxicologists and risk assessors alike are still working out best practices for measuring and interpreting findings from mixtures. Collaboration among manufacturing, production, and research agencies is essential to learn how best to provide safe supplements and more information to consumers.

Written by sciencepolicyforall

March 15, 2019 at 10:46 am