Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘medical ethics’

Growing Need for More Clinical Trials in Pediatrics


By: Erin Turbitt, PhD

Source: Flickr by Claudia Seidensticker via Creative Commons

      There have been substantial advances in biomedical research in the US in recent decades, yet children have not benefited from improvements in health and well-being to the same degree as adults. An illustrative example is that many drugs used to treat children have never been approved for that use by the Food and Drug Administration (FDA), while many more drugs have been approved for use in adult populations. As a result, some drugs are prescribed to pediatric patients outside the specifications for which they were approved, a practice referred to as ‘off-label’ prescribing. For example, some drugs approved for Alzheimer’s disease are used to treat autism in children: donepezil, used to treat dementia in Alzheimer’s patients, is also used to improve sleep quality in children with autism. Another example is the use of the pain medication paracetamol in premature infants in the absence of knowledge about its effects in this population. While decisions about off-label prescribing are usually informed by scientific evidence and professional judgement, they may carry harms. There is growing recognition that children are not ‘little adults’: their developing brains and bodies may react differently from those of fully developed adults. While doses for children are often calculated by scaling the adult dose to the child’s body weight, the child’s stage of development also affects drug response. Babies have difficulty breaking down drugs because of the immaturity of their kidneys and liver, whereas toddlers can break down drugs more effectively.
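The weight-based scaling described above can be made concrete with a short sketch. This is purely illustrative, not a clinical formula: the function names, the 70 kg reference adult, the 0.75 allometric exponent, and the 400 mg example dose are all assumptions introduced here.

```python
def linear_dose(adult_dose_mg, child_weight_kg, adult_weight_kg=70.0):
    """Naive dose scaling: dose is proportional to body weight."""
    return adult_dose_mg * child_weight_kg / adult_weight_kg

def allometric_dose(adult_dose_mg, child_weight_kg, adult_weight_kg=70.0,
                    exponent=0.75):
    """Allometric scaling: dose tracks metabolic capacity, which grows
    more slowly than body weight (a commonly cited exponent is ~0.75)."""
    return adult_dose_mg * (child_weight_kg / adult_weight_kg) ** exponent

# A hypothetical 400 mg adult dose scaled for a 10 kg toddler:
print(round(linear_dose(400, 10)))      # 57 (mg)
print(round(allometric_dose(400, 10)))  # 93 (mg) -- the two methods disagree
```

The gap between the two estimates is one reason weight-only scaling is unreliable: neither formula captures organ maturity, which is the developmental factor this paragraph highlights.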

The FDA requires data about drug safety and efficacy in children before approving drugs for use in pediatric populations, and the best way to produce this evidence is through clinical drug trials. Historically, the use of children in research has been ethically fraught, with some of the earliest examples coming from vaccine trials, such as the development of the smallpox vaccine in the 1790s. Edward Jenner, who developed the smallpox vaccine, has famously been reported to have tested it on several young children, including his own, without consent from the children’s families. In the decades that followed, many researchers tested new treatments, including drugs and surgical procedures, on institutionalized children. It was not until the early 20th century that these practices were criticized and debate began over the ethical use of children in research. Today, ethical guidance generally specifies that individuals unable to exercise informed consent (including minors) may participate in research provided informed consent is obtained from a parent or legal guardian. In addition to a guardian’s informed consent, the assent (‘affirmative agreement’) of the child is also required where appropriate. Furthermore, research protocols involving children must undergo rigorous evaluation by Institutional Review Boards before the research may proceed.

Contributing to the lack of evidence on the effects of drugs in children is that fewer clinical trials are conducted in children than in adults. One study reports that from 2005 to 2010, ten times fewer trials were registered in the US for children than for adults. Recognizing the need to increase the number of pediatric clinical trials, the FDA introduced incentives to encourage the study of interventions in pediatric populations: the Best Pharmaceuticals for Children Act (BPCA) and the Pediatric Research Equity Act (PREA). The BPCA delays approval of competing generic drugs by six months and encourages the NIH to prioritize pediatric clinical trials for drugs that require further evidence in children. The PREA requires more companies to have drugs likely to be used in children assessed in pediatric populations. Combined, these initiatives have improved the labeling of over 600 drugs to include pediatric safety information, such as approved use and dosing. Noteworthy examples include two asthma medications, four influenza vaccines, six medications for seizure disorders, and two products for treating migraines. However, these incentives also have downsides. Pediatricians have voiced concern over the rising cost of some of these drugs developed specifically for children, which have involved minimal innovation. For example, approval of a liquid formulation of a drug used to treat heart problems in children has resulted in that formulation costing 700 times more than the tablet equivalent.

A further consideration in pediatric clinical trials is the high dropout rate among participants and the difficulty of recruiting adequate numbers of children (especially for trials in rare disease populations), which sometimes leads to trial discontinuation. A recent report indicates that 19% of pediatric trials conducted from 2008 to 2010 were discontinued early, with an estimated 8,369 children enrolled in trials that were never completed. While some trials are discontinued for safety reasons or because efficacy findings suggest changes in the standard of care, many (37%) are discontinued due to poor patient accrual. There is insufficient research on the factors influencing parents’ decisions to enter their child in a clinical trial, and research in this area may improve patient recruitment for these trials. Such research must include or be informed by members of the community, such as parents deciding whether to enroll their child in a clinical trial, and disease advocacy groups. The FDA has an initiative to support the inclusion of community members in the drug development process: through the Patient-Focused Drug Development initiative, patient perspectives are sought on the benefit-risk assessment process. For example, patients are asked what worries them most about their condition, what they would consider meaningful improvement, and how they would weigh the potential benefits of treatments against common side effects. The initiative involved public meetings, held from 2013 to 2017, on over 20 disease areas. While most of the diseases selected more commonly affect adults than children, some child-specific disease areas were included. For example, on May 4, 2017, a public meeting was held on Patient-Focused Drug Development for Autism. The meeting included discussions from a panel of caregivers about the significant health effects and daily impacts of autism and current approaches to treatment.

While it is encouraging that the number of pediatric trials is increasing, ultimately leading to improved treatments and outcomes for children, many challenges remain for pediatric drug research. Future research in this area must explore parental decision-making and experiences, which can illuminate the motivations and risk tolerances of parents considering enrolling their child in a clinical trial and potentially improve trial recruitment rates. This research can also help ensure that clinical trials are conducted ethically, adequately balancing the need for more research against the potential for harm to pediatric research participants.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 24, 2017 at 5:04 pm

Entrusting Your Life to Binary: The Increasing Popularity of Robotics in the Operating Room


By: Sterling Payne, B.Sc.

Source: Flickr; by Medical Illustration, Wellcome Images, under Creative Commons

       Minimally invasive surgery has been around since the late 20th century; however, technological advancement has sent robotic surgeons to the forefront of medicine in the past 20 years. The term “minimally invasive” refers to performing a surgery through small, precise incisions at a distance from the surgical target, reducing the physical impact on the patient in terms of pain and recovery time. As one can imagine, surgeons must use small instruments during a minimally invasive procedure and operate with a high level of control in order to perform a successful operation. In light of these requirements, and owing to fast-paced advances in robotics over the last decade, robots have become more common in the operating room. Though their use, done correctly, benefits all parties involved, several questions of policy accompany the robotic advance and the goal of fully autonomous surgery.

The da Vinci system, approved by the FDA in 2000 for use in surgical procedures, is one of the most popular devices for minimally invasive surgery. The newest model, the da Vinci Xi® System, includes four separate robotic arms that operate a camera and multiple arrays of tools. The camera projects a 3D view of the surgical field onto a monitor for the surgeon, who in turn operates the other three arms to perform highly precise movements. The da Vinci arms and instruments give the surgeon more control via additional degrees of freedom (less restricted movement) and features such as tremor reduction.

Though the da Vinci system is widely used, its success still depends on the skill and experience of the operator. Surgical robotics engineer Azad Shademan and colleagues acknowledged this in a recent publication in Science, highlighting their successful design, manufacture, and use of the Smart Tissue Autonomous Robot (STAR). The STAR combines a complex imaging system for tracking the dynamic movement of soft tissue with a custom algorithm that allows the robot to perform a fully autonomous suturing procedure. Shademan and colleagues demonstrated the robot’s effectiveness by having it perform stitching procedures on non-living pig tissue in an open surgical setting. Not only did the STAR succeed in these procedures, it outperformed the highly experienced surgeons it was pitted against. More information on the STAR can be found here.

In response to the da Vinci system, Google recently announced Verb Surgical, a joint venture with Johnson & Johnson. Verb aims to create “a new future, a future unimagined even a few years ago, which will involve machine learning, robotic surgery, instrumentation, advanced visualization, and data analytics”. Whereas the da Vinci system helps the surgeon perform small, precise movements, Verb will use artificial intelligence, among other technologies, to augment the surgeon’s view, providing information such as anatomy and the boundaries of structures such as tumors. A procedure assisted by the da Vinci system can increase the surgeon’s physical dexterity and mobility; Verb aims to achieve that and also to give a “good” surgeon the knowledge and judgment that expert surgeons previously accumulated only over hundreds of surgeries. In a way, Verb could level the playing field in more ways than one, giving all surgeons access to a vast knowledge base built through machine learning.

As Tesla’s October announcement of self-driving capability for its cars suggests, autonomous robots are becoming integrated into society; surgery is no exception. A 2014 paper in the American Medical Association Journal of Ethics states that we can apply Isaac Asimov’s (author of I, Robot) three laws of robotics to robot-assisted surgery “if we acknowledge that the autonomy resides in the surgeon”. However, the policy discussion around fully autonomous robot surgeons is still emergent. In malpractice cases, the doctor performing the operation is usually the responsible party. When the doctor is replaced with an algorithm, where does the accountability lie? When a robot surgeon makes a mistake, one could argue that the human surgeon failed to step in when necessary or to supervise the surgery adequately. One could also argue that the manufacturer should bear responsibility for a malfunction during an automated surgery. Other candidates include the programmers who designed the algorithms (like the stitching algorithm featured in the STAR) and the hospital housing the robot. This entry from a clinical robotics law blog examines these questions from a litigator’s standpoint.

A final talking point at the dawn of autonomous surgical technology is safeguarding wireless connections to prevent “hacking” or unintended use of the machine during telesurgery. Telesurgery refers to the performance of an operation by a surgeon physically separated from the patient by a long distance, accomplished through wireless connections that are at times open and unsecured. In 2015, a team of researchers at the University of Washington probed the weaknesses of the procedure by hacking into a teleoperated surgical robot, the Raven II. Their attacks exposed vulnerabilities: flooding the robot with useless data made intended movements less fluid and could even trigger an emergency-stop mechanism. Findings such as these will inform the future development and security of teleoperated surgical robots, their fully autonomous counterparts, and the policy that binds them.

When a web browser or computer application crashes, we simply hit restart, relying on autosave or some other mechanism to preserve our previous work. Unlike a computer, a human has no “refresh” button; any wrongful actions that harm the patient cannot be reversed, placing a far greater weight on all parties involved when a mistake is made. As it stands, the policy discussion for accountable, autonomous robots and algorithms is gaining much-needed momentum as said devices inch their way into society.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 24, 2016 at 9:00 am

Posted in Essays


Science Policy Around the Web – June 12, 2015


By: Varun Sethi, MD, PhD

Technology and Medical Ethics

AMA Tackles Ethics of Telemedicine

A report by the Council on Ethical and Judicial Affairs (CEJA) was recently supported at a reference committee hearing but failed to get approval from the American Medical Association (AMA) House of Delegates. This report recommended that physicians providing clinical services via telemedicine must uphold the standards of professionalism as expected in in-person examinations, and be cognizant of the limitations of relevant technologies.

Some of the concerns were about the parts of the recommendations dealing with informed consent. Todd M. Hertzberg, MD, a delegate of the American College of Radiology (ACR), explained that there are scenarios in teleradiology and telepathology wherein the informed consent process is absent or less specific. Arlo F. Weltge, MD, PhD, MPH, a delegate from Texas, agreed that telemedicine is clearly emerging as important, but expressed concern about a current case in which the Texas Medical Board was being sued over an interpretation. Weltge explained that the AMA code on telemedicine ethics could be read to imply that “anybody can set up a remote station and prescribe medications and, if you will, become an internet pill-mill.”

Other delegates supported the code, emphasizing that telemedicine is the future, and the future is here. Nonetheless, there is a strong need for ethical guidelines, and developing them should be a priority. (Sarah Wickline Wallen, MedPage Today)

Biomedical Research Funding

Study claims $28 billion a year spent on irreproducible biomedical research

Economists report that the exorbitant sum of $28 billion is spent each year on irreproducible preclinical research in the United States. Reviewing over two dozen studies, the economists estimated that about 53% of preclinical studies have errors and are thus not reproducible. The sources of these ‘errors’ ranged from problems with reagents and reference materials (36%) to flaws in study design (28%), errors in data analysis and reporting (25%), and problems with laboratory protocols (11%). With an estimated $56 billion spent annually on preclinical research by the NIH and US public and private funders, roughly 50% of that total, or $28 billion, went to ‘irreproducible research’.
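The headline figure is a back-of-envelope product of two numbers, which can be checked in a few lines; the variable names here are mine, and the figures are those quoted above.

```python
total_spend_usd = 56e9       # estimated annual US preclinical research spending
irreproducible_rate = 0.50   # roughly half of that spending, per the report

irreproducible_spend = total_spend_usd * irreproducible_rate
print(f"${irreproducible_spend / 1e9:.0f} billion")  # prints "$28 billion"

# The attributed error sources account for all irreproducible studies:
error_shares = {
    "reagents and reference materials": 0.36,
    "study design": 0.28,
    "data analysis and reporting": 0.25,
    "laboratory protocols": 0.11,
}
assert abs(sum(error_shares.values()) - 1.0) < 1e-9  # shares sum to 100%
```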

While the NIH has issued new criteria to strengthen the reproducibility of funded research, the authors of this report argue that irreproducible research is not necessarily a ‘waste’. They recommend that investment be increased, with a relatively small portion dedicated to improving the reproducibility of research. Other strategies include better training of researchers in study design and stressing the use of validated reagents. Microbiologist Ferric Fang is skeptical that findings from a few studies can be extrapolated; calling the report ridiculous and unhelpful, Fang stressed that an irreproducible result does not imply that the original result was incorrect. (ScienceInsider)

Research and Collaboration Policy

Funders must encourage scientists to share

Following the precedent established by the Human Genome Project, researchers agree that it is important to share large data sets (e.g. genomics, epidemiology, population-level health) in order to realize the full potential and maximize the benefits of those data. A recent survey reported that both the providers and the users of shared data are frustrated with the data-access process: access protocols are highly specific and tailored to individual studies, increasing the administrative burden.

An expert advisory group has published recommendations to aid researchers. They suggest that data-access plans be incorporated into the grant application process. Funders should be encouraged to standardize the process while allowing flexibility for individual study characteristics; access procedures should be transparent and straightforward, with an independent appeals process to settle disputes. Participants in studies can also be better protected if data-access provisions are planned at the outset; for example, permission to share de-identified data could be incorporated into the consent form. To encourage scientists to contribute to data sharing, the group suggested, rewards could also be used as motivation.

To protect data providers, it is also important to allow justifiable restrictions on hard-earned data sets; the group stressed the need for a clear explanation of any conditions imposed. Significant breaches of data or material transfer agreements should be treated seriously so as to deter such practices. In an era of international and collaborative science, scientists must be encouraged to volunteer to share data and must also be made to feel protected. (Martin Bobrow, Nature Column: World View)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

June 12, 2015 at 9:00 am