Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘ethics’

Gene editing: regulatory and ethical challenges


By: Chringma Sherpa, Ph.D.

Image by Colin Behrens from Pixabay 

When power is discovered, man always turns to it. The science of heredity will soon provide power on a stupendous scale; and in some country, at some point, perhaps, not distant, that power will be applied to control the composition of a nation. Whether the institution of such control will ultimately be good or bad for that nation, or for humanity at large, is a separate question.

William Bateson, English biologist who coined the term “genetics.”

On November 25, 2018, in an allegedly leaked YouTube video, He Jiankui, a scientist at the Southern University of Science and Technology in Shenzhen, China, revealed the birth of the first gene-edited babies, created using a technology called CRISPR. The scientific community has long held a consensus that heritable changes should not be made to the human genome, in part to prevent off-target and unwanted genetic changes introduced during editing from being passed on to offspring. He became the first scientist to publicly violate this consensus, triggering an international scandal and criminal and ethics investigations into both He and his collaborators.

In the wake of He's CRISPR-babies scandal, scientists worldwide are debating the ethical and regulatory measures that would discourage another rogue scientist from attempting such an irresponsible feat. At the 2nd international summit on human gene editing, which convened two days after He's video became public, He presented his work. The summit was well attended by ethicists and journalists as well as scientists. There, David Baltimore of the California Institute of Technology, who chaired the organizing committees for both the 1st and 2nd international summits on human gene editing, read one of the conclusions from the 1st summit, held in Washington, DC in 2015: "It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application." On the basis of that statement, Baltimore called He's work outright irresponsible. Many other ethical and safety-related questions were raised at the summit, which He failed to answer or did not answer convincingly.

He's scandal has driven various organizations to draft new guidelines and sanctions aimed at preventing unethical and unapproved uses of genome editing. China has introduced new rules requiring human gene-editing projects to receive prior approval from its health ministry, on pain of fines and blacklisting. Both the 2nd human gene editing summit and the WHO panel that convened in March 2019 have proposed a central registry of human gene-editing research and called for an international forum or committee to devise guidelines for human gene editing based on common norms and on the differences of opinion between countries. To allow time for the creation and effective implementation of new regulations, the WHO also called for a global moratorium on heritable editing of human eggs, sperm, or embryos for the next five years. Supporting the WHO panel's recommendations, Francis Collins, director of the National Institutes of Health (NIH), said that "NIH strongly agrees that an international moratorium should be put into effect immediately". However, not all scientists favor a moratorium, believing it might stifle the growth of a technology that could prove safe and beneficial in the near future. Jennifer Doudna of the University of California, Berkeley, one of the co-inventors of CRISPR gene editing, says that rather than a moratorium she prefers strict regulation that precludes the use of germline editing until scientific, ethical, and societal issues are resolved. David Baltimore agrees, noting that the word "moratorium" was intentionally avoided at both human gene editing summits because a moratorium would be hard to reverse. Science historian Ben Hurlbut of Arizona State University, who had numerous discussions with He before Lulu and Nana were created, thinks a blanket moratorium on clinical germline editing would have prevented He from proceeding. The two summits and a 2015 essay by Baltimore, Doudna, and 16 co-authors had already outlined numerous criteria for clinical germline editing. According to Hurlbut, He weighed these criteria and, believing that his procedure met all of them, proceeded. A categorical prohibition of germline editing would not have left room for him to substitute his own subjective judgment and act out of self-interest.

The modern debate over CRISPR editing is not the first time the scientific community has come together to discuss game-changing biological technologies, and it is heavily informed by two prior events. In 1972, Paul Berg and his postdoctoral researcher David Jackson used recombinant DNA technology to create the first chimeric DNA. This invention created an uproar among scientists and the general public, who feared the technology would lead to uncontrollable and destructive superbugs, exaggerated versions of which can be seen in some science fiction movies. Yielding to the opinions and sentiments of fellow scientists, Berg refrained from cloning such recombinant DNAs, and in 1974 he pleaded for a voluntary moratorium on certain kinds of recombinant DNA research until the safety issues had been resolved. He also moved quickly to organize the Asilomar conference (Asilomar II) in 1975, which resembled the 2nd human gene editing summit in that it invited not only scientists but also lawyers, ethicists, writers, and journalists to weigh in on the risk-benefit analysis of recombinant DNA technology. On the recommendation of the Asilomar conference, Donald Fredrickson, then director of the National Institutes of Health (NIH), initiated the formation of the Recombinant DNA Advisory Committee (RAC) to act as a gatekeeper for all research involving recombinant DNA technology. The scope of the committee, which was composed of stakeholders including basic scientists, physicians, ethicists, theologians, and patient advocates, was later expanded to encompass the review and approval of human gene therapy research. In 2019, owing to redundant regulatory oversight between the US Food and Drug Administration (FDA) and RAC, the committee was scaled back to a purely advisory body providing advice on the safety and ethical issues associated with emerging biotechnologies.

While this is a successful example of scientific self-regulation, the second event resulted in a major setback for the field of gene therapy. On September 13, 1999, Mark Batshaw and James Wilson of the University of Pennsylvania supervised the administration of an adenoviral gene-therapy vector to 18-year-old Jesse Gelsinger in a clinical trial. Gelsinger died of liver and kidney failure and brain damage three days later. Like the birth of the CRISPR babies, Gelsinger's death was an instance in which a new technology was used prematurely, without a thorough assessment of its safety profile. It is suspected that the clinical applications headed by He and by Wilson might also have been motivated by fame and financial gain; both men had financial stakes in private biotechnology companies that stood to benefit from these human trials. In the aftermath of Gelsinger's death, Wilson was banned from carrying out FDA-regulated clinical trials for the next five years, nearly all gene therapy trials were frozen, and many biotechnology companies carrying out such trials went bankrupt. This was a dark period in the history of gene therapy, and it would take almost another decade of introspection, reconsideration, and more basic experimentation for gene therapy to re-emerge as a viable therapeutic strategy.

Figure 1: The regulatory status of human germline gene modification in various countries. Thirty-nine countries were surveyed and categorized as "Ban based on legislation" (25, pink), "Ban based on guidelines" (4, faint pink), "Ambiguous" (9, gray), and "Restrictive" (1, light gray). Non-colored countries were excluded from the survey. Adapted from Araki, M. and Ishii, T. (2014): "International regulatory landscape and integration of corrective genome editing into in vitro fertilization," Reproductive Biology and Endocrinology 12:108.

Scientists at both the Asilomar and human gene editing conferences passionately debated the safety of the relevant technologies but largely deferred discussion of the biggest ethical issue associated with them – the eventual creation of designer babies. That gene editing sits on a slippery slope to eugenics has been recognized since the days of Charles Darwin and Gregor Mendel, when the study of genes and heredity was still in its infancy and the discovery of DNA as the genetic material was half a century away. One of the earliest proponents of genetic manipulation for human benefit was Francis Galton, Charles Darwin's cousin, who proposed an unnatural, accelerated selection of beneficial traits through marriage between people with desirable traits. The danger that some rogue scientist might someday use germline editing technology in the service of eugenics lurks in the minds of those who understand the potential of currently available gene editing technologies. More fearful still is the idea that a wave of positive eugenics would give way to negative eugenics – the elimination of undesirable traits – as it did in the decades around World War II, exemplified by the famous case of Carrie Buck, a woman who was designated "mentally incompetent" and involuntarily sterilized.

Various countries have their own regulations and legislation on germline editing to guard against misuse of this powerful technology. Figure 1 presents a summary of the regulatory landscape of germline gene modification surveyed in thirty-nine countries by Araki Motoko and Tetsuya Ishii. In the US, Congress has shown strong opposition to germline gene editing. In 1996, it passed a rider to the annual appropriations bill that prohibits the use of federal funds for any research involving human embryos. In another appropriations bill, passed in 2015, Congress banned the FDA from considering applications involving therapeutic modification of the human germline.

Human gene editing holds great promise for treating many life-threatening and previously intractable diseases. Only if this discipline is held to high ethical standards and regulated sensibly at the international, national, and personal levels will we reap the benefits of this powerful technology.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 29, 2019 at 9:25 am

Posted in Essays


Science Policy Around the Web – April 24, 2019


By: Patrick Wright, PhD

Image by mohamed Hassan from Pixabay 

Why Some Anti-bias Training Misses the Mark

A new study published in the Proceedings of the National Academy of Sciences (PNAS), entitled "The mixed effects of online diversity training," reports that online diversity-training programs aimed at reducing gender and racial bias among employees do not substantially affect workplace behavior, particularly among male employees.

The study cohort consisted of 3,016 volunteers (61.5% men), all salaried employees across 63 nations of a single global professional-services business. Each participant was randomly assigned to one of three conditions: gender-bias training, general-bias training, or a control session with no bias-specific content. Training for the treatment conditions was divided into five sections, including "What are [gender] stereotypes and why do they matter?" and "How can we overcome [gender] stereotypes?" (the word "gender" was excluded from general-bias training sessions). The control condition, by contrast, contained sections such as "Why is inclusive leadership important?" and "What makes teams more inclusive?"; neither bias nor stereotyping was ever explicitly mentioned.

The authors acquired data on attitudinal shifts and behavioral changes for up to five months after the training. All volunteers were asked to complete a follow-up survey intended to help address inequalities that women and racial minorities face in the workplace. Additionally, once a week for 12 weeks after completing training, employees were sent texts with prompts such as "Have you used any inclusive leadership strategies this week? Respond Y for Yes and N for No."

Interestingly, the authors observed no positive shifts in behavior among male volunteers. Only members of groups that are commonly affected by bias (e.g., under-represented minorities) were observed to change their behavior. Lead author Edward Chang summarized this finding: "The groups that historically have had more power – white people and men – didn't move much." Women volunteers who participated in the training sought mentorship from senior colleagues and offered mentorship to junior female colleagues after the sessions.

Chester Spell, a professor of management at the Rutgers School of Business in Camden, New Jersey who studies behavioral and psychological health in organizations, believes that for diversity training to be truly impactful, it "has to be part of the DNA of an organization, not an appendix." Organizations must show that they are serious about fighting bias by committing to a range of initiatives that educate employees about the presence and effects of bias. In spring of 2018, Starbucks closed 8,000 stores on a Tuesday afternoon for a four-hour anti-bias training for employees, focused specifically on racial tolerance. This was in response to a prior incident in which a Philadelphia-area Starbucks café manager's call to police resulted in the arrests of two black men who were waiting for a friend in the café. However, Starbucks did not comment on future training plans.

The most effective means of implementing anti-bias training are still not established; this remains an active area of research, especially regarding the ideal delivery method and number of sessions. In a 2016 meta-analysis spanning 40 years of research on the impact of diversity training, Bezrukova et al. observed little effect of stand-alone diversity trainings on employees' attitudes toward bias. Offering repeated or longer training sessions, complemented with other approaches such as deciding hiring criteria prior to candidate evaluation, may be the best way forward. Notably, individuals in academia view these trainings more favorably, and are more receptive to them, than those in the business sector. Ülger and colleagues, in a meta-analytic review of 50 studies of in-school interventions on attitudes toward outgroup members (members of different ethnic, religious, or age groups, for example), reported that statistically significant, moderate changes in outgroup attitudes can be achieved through anti-bias programs in school. However, unlike researcher-led interventions, teacher-led and media-based interventions showed no evidence of producing positive outcomes. One-on-one interventions were the most impactful.

 (Virginia Gewin, Nature)

Universities Will Soon Announce Action Against Scientists Who Broke NIH Rules, Agency Head Says

During a Senate Appropriations Subcommittee hearing in early April, Dr. Francis Collins, Director of the National Institutes of Health (NIH), said that over the rest of the month many universities would announce actions against faculty members who failed to comply with agency rules on protecting the confidentiality of peer review, handling intellectual property, and disclosing foreign ties. Dr. Collins told Senator Roy Blunt (R-MO), chair of the subcommittee, that there are ongoing investigations at more than 55 U.S. institutions and that some scientists have been found not to have disclosed foreign funding for work that was also being supported by NIH.

The push to systematically uncover potential violations of these intellectual property and confidentiality rules began in August 2018, when Dr. Collins wrote to the 10,000 institutions receiving NIH funding, requesting that they look for any instances of concerning behavior. Dr. Collins noted that some faculty researchers have already been dismissed: "There are increasing instances where faculty have been fired, have been asked to leave the institution, many of them returning back to their previous foreign base." For example, MD Anderson Cancer Center, part of the University of Texas system, announced last week that it has fired three senior researchers who, after being identified by NIH, were found to have committed potentially "serious" violations of rules on peer-review confidentiality and disclosure of foreign ties.

However, both Dr. Collins and Senator Blunt emphasized that this is not a pervasive problem; most foreign scientists working in the United States and funded by the NIH follow funding and disclosure rules. "We need to be careful that we don't step into something that almost seems a little like racial profiling," Dr. Collins stated at the hearing.

 (Jocelyn Kaiser, Science)



Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 25, 2019 at 10:06 am

Science Policy Around the Web – April 19, 2019


By: Neetu Gulati, PhD

Image by Raman Oza from Pixabay 

Scientists Restore Some Function in the Brains of Dead Pigs 

Scientists have partially revived the brains of dead pigs hours after the animals were killed, contradicting the dogma surrounding death. Cut off from oxygen, the brain of a mammal is supposed to die after about 15 minutes, and the process was thought to be widespread and irreversible: once cells in the brain die, they cannot be brought back. A study published in Nature has challenged this dogma. While none of the tested brains regained signs of consciousness, the Yale researchers were able to demonstrate that cellular function was either preserved or restored.

The study used 32 brains from pigs that had been slaughtered for food. After waiting four hours – well past the 15 minutes of oxygen deprivation thought to "kill" the brain – the researchers hooked the brains up to a system called BrainEx, which pumped in a cocktail of specially formulated nutrients and chemicals for six hours. Compared to untreated brains, the BrainEx-treated brains had better-preserved structure and less cell death, and some cellular functions were restored. Nevertheless, Nenad Sestan, the lead researcher on the project, was quick to point out that while the brains had some restored activity, "this is not a living brain."

In fact, the goal of the study was not to restore consciousness, which would raise many ethical concerns. The scientists monitored electrical activity in the brains and intended to stop the experiment if any signs of consciousness were detected. Stephen Latham, a bioethicist who worked with the team, explained that the researchers would need more ethical guidance before attempting any studies that altered consciousness in the pigs' brains. As a precaution, the BrainEx cocktail also included a drug known to dampen neuronal activity.

The implications of this study are vast. The breakthrough will hopefully create a better link between basic neuroscience and clinical research, and even with the ethical considerations, it is likely that people will eventually want to apply this technology to human brains. That prospect may spur interesting policy discussions: while there are many restrictions on what can be done with living research animals or human subjects, there are far fewer restrictions on the dead. It may also affect organ transplantation from brain-dead individuals, who may eventually become candidates for brain revival. A lot still needs to be investigated in the meantime, but the implications are vast and mind-blowing.

(Nell Greenfieldboyce, NPR)

Darkness Visible, Finally: Astronomers Capture First Ever Image of a Black Hole

Last week it was announced that scientists had captured an image of the shadow of a black hole for the first time in history. The image is the result of an international collaboration of 200 members of the Event Horizon Telescope team. The results were simultaneously announced at news conferences in six locations around the world, including at the National Science Foundation.

The data were collected over a 10-day period using eight telescopes around the world, focused on Messier 87 (M87), a giant galaxy within the constellation Virgo. It is within M87 that a black hole billions of times more massive than the sun was visualized. After the data were collected, it took two years of computer analysis to produce the blurry image of a lopsided ring of light around a dark circle.

Black holes like the one found in M87 are supermassive, dense objects whose gravity pulls so strongly that no matter can escape. According to Einstein's theory of general relativity, the collapse of space-time within a black hole prevents even light from escaping. The first direct proof of the existence of black holes came in 2016, when LIGO detected the collision of a pair of black holes. Now, merely three years later, the world has photographic evidence, and features of the black hole can be determined, including its mass: 6.5 billion solar masses, heavier than most previous determinations.

Moving forward, the Event Horizon Telescope partnership plans to continue observing M87 and to collect data on other regions of space. The telescope network also continues to expand: earlier this year another telescope was added to the collaboration, with more antennas expected to join soon. The collaboration will continue to observe black holes and monitor their behavior to see how they change over time.

(Dennis Overbye, New York Times)


Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 21, 2019 at 12:10 pm

The need for regulation of artificial intelligence


By: Jayasai Rajagopal, Ph.D.


Source: Wikimedia

The development and improvement of artificial intelligence (AI) portends change and revolution in many fields. A quick glance at the Wikipedia article on applications of artificial intelligence highlights the breadth of fields already affected by these developments: healthcare, marketing, finance, music, and many others. As these algorithms increase in complexity and grow in their ability to solve more diverse problems, the need to define rules by which AI is developed becomes more and more important.

Before explaining the potential pitfalls of AI, a brief explanation of the technology is required. Attempting to define artificial intelligence raises the question of what is meant by intelligence in the first place. Poole, Mackworth, and Goebel clarify that for an agent to be considered intelligent, it must adapt to its surrounding circumstances, learn from changes in those circumstances, and apply that experience in pursuit of a particular goal. A machine that is able to adapt to changing parameters, adjust its programming, and continue to pursue a specified directive is an example of artificial intelligence. While such simulacra are found throughout science fiction, dating back to Mary Shelley's Frankenstein, they are a more recent phenomenon in the real world.
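As a concrete illustration of that definition, here is a minimal sketch in Python of the adapt-learn-pursue loop: an epsilon-greedy agent that keeps re-estimating which of two actions pays off, so its behavior adjusts when the environment changes. The scenario and numbers are invented for illustration; this is not any particular deployed system.

```python
import random

def run_agent(reward_means, epsilon=0.1, step_size=0.1):
    """Epsilon-greedy agent: act, observe a noisy reward, update estimates."""
    estimates = [0.0, 0.0]            # the agent's learned value of each action
    for means in reward_means:        # the environment can change each step
        if random.random() < epsilon:     # occasionally explore at random
            action = random.randrange(2)
        else:                              # otherwise exploit the best estimate
            action = 0 if estimates[0] >= estimates[1] else 1
        reward = random.gauss(means[action], 0.1)
        # A constant step size lets the agent track a changing environment
        # instead of staying anchored to stale experience.
        estimates[action] += step_size * (reward - estimates[action])
    return estimates

# The better action flips halfway through; the agent's estimates follow suit.
env = [(1.0, 0.0)] * 250 + [(0.0, 1.0)] * 250
print(run_agent(env))   # estimates now favor action 1
```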

Development of AI technology has taken off within the last few decades as computer processing power has increased. Computers began successfully competing against humans in chess as early as 1997, with Deep Blue's victory over Garry Kasparov. In recent years, computers have started to earn victories in even more complex games such as Go and even video games such as Dota 2. Artificial intelligence programs have become commonplace at many companies, which use them to monitor their products and improve the performance of their services. A report in 2017 found that one in five companies employed some form of AI in their workings. Such applications are only going to become more commonplace in the future.

In the healthcare field, the prominence of AI is readily visible. A report by BGV predicted a total of $6.6 billion invested in AI within healthcare by the year 2021. Accenture found that this could lead to savings of up to $150 billion by 2026. With the recent push toward personalized and precision medicine, AI can greatly improve treatment and the quality of care.

However, there are pitfalls associated with AI. At the forefront, AI poses a potential risk of abuse by bad actors. Companies and websites are frequently reported in the news for being hacked and losing customers' personal information. The 2017 WannaCry attack crippled the UK's healthcare system, as regular operations at many institutions were halted due to compromised data infrastructure. While cyberdefenses will evolve with the use of AI, there is a legitimate fear that bad actors could just as easily utilize AI in their attacks. Regulating the use and development of AI can limit the number of such actors with access to those technologies.

Another concern with AI is the privacy question raised by the amount of data required. Neural networks, which seek to imitate the neurological processing of the human brain, require large amounts of data to reliably generate their conclusions. Such large datasets need to be curated carefully to ensure that identifying information that could compromise the privacy of citizens is not easily divulged. Additionally, data mining and other AI algorithms could uncover information that individuals may not want revealed. In 2012, a coupon-suggestion algorithm used by Target was able to discern the probability that some of its shoppers were pregnant. This proved problematic for one teenager, whose father wanted to know why Target was sending his daughter coupons for maternity clothes and baby cribs. As with the cyberwarfare concern, regulation is a critical component in protecting the privacy of citizens.

Finally, in some fields, including healthcare, there is an ever-present concern that artificial intelligence may replace some operations entirely. In radiology, for example, there is a fear that improvements in image analysis and computer-aided diagnosis driven by neural networks could replace clinicians. For the healthcare field in particular, this raises several important ethical questions. What if the diagnosis of an algorithm disagrees with a clinician's? Since the knowledge an algorithm has is limited by the information it has been exposed to, how will it react when a unique case is presented? From this perspective, regulation of AI is important not only to address practical concerns, but also to pre-emptively answer ethical questions.

While regulation as strict as Asimov's Three Laws may not be required, a more uniform set of rules governing AI is needed. At the international level, there is much debate among the members of the United Nations as to how to address the issue of cybersecurity. Other organizations, such as the European Union, have made more progress. A document recently released by the EU highlights ethical guidelines that may serve as the foundation for future regulations. At the domestic level, there has been a push from scientists and leaders in the field toward harnessing the development of artificial intelligence for the good of all. In particular, significant headway has been made in the regulation of self-driving cars. Laws passed in California restrict how the cars can be tested, and by 2014 four states already had legislation applying to these kinds of cars.

Moreover, the FDA recently released a statement outlining its approach to the regulation of artificial intelligence in the context of medical devices. At the time of this writing, a discussion paper describing the FDA's proposed approach is open for public comment. The agency notes that conventional methods of acquiring pre-market clearance for devices may not apply to artificial intelligence; the newly proposed framework adapts existing practices to the context of software improvements.

Regulation must also be handled with care. Over-restricting the use of and research in artificial intelligence could stifle development. Laws must be made with knowledge of the potential benefits that new technological advancements could bring. As noted by Gurkaynak, Yilmaz, and Haksever, lawmakers must strike a balance between preserving the interests of humanity and the benefits of technological improvement. Indeed, artificial intelligence poses many challenges for legal scholars.

In the end, artificial intelligence is an exciting technological development that can change the way we go about our daily business. With proper regulation, legislation, and research focus, this technology can be harnessed in a way that benefits the human experience while preserving development and the security of persons.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 18, 2019 at 2:25 pm

Science Policy Around the Web – November 20, 2018


By: Andrew Wright, B.S.


Source: Wikimedia

 

Habitat Loss Threatens All Our Futures, World Leaders Warned

Recent reports have suggested that humanity has only 12 years to avoid catastrophic environmental collapse due to 1.5°C of industrial warming. While these findings have lent new urgency to solutions to the threat of runaway climate change, there exists a commensurate and less often visited issue: the rapid decline in global biological diversity. Driven primarily by agricultural land conversion of terrestrial and marine ecosystems (via forest clearing and river damming, respectively), vertebrate populations have declined by 60% on average since 1970, according to the World Wildlife Fund's most recent Living Planet Report. While this decline appears strongest in South and Central America and in freshwater habitats, the report joins a compendium of literature suggesting holistic declines in biodiversity among birds, insects, fish, and terrestrial vertebrates as part of an ongoing anthropogenic mass extinction event.

To address the issue, parties to the UN Convention on Biological Diversity (CBD) are currently meeting in Sharm El Sheikh, Egypt to discuss progress on the Aichi biodiversity targets for 2020. These targets grew out of the CBD itself, a multilateral treaty signed in 1992 and focused on preserving biodiversity, the sustainable use of biological resources, and the equitable sharing of resources. The Aichi targets specified that, by 2020, people would be aware of risks to biodiversity and biodiversity values would be adopted by public, private, and governmental entities. Given the rapidity, intensity, and ubiquity of the decline in species, most, if not all, of these targets will likely be missed. As such, the delegates from the 196 signatory nations will also work on creating new biodiversity targets to be finalized at the next CBD meeting in China.

Since a comprehensive solution seems necessary given the increasingly global nature of trade, authors of the new targets hope to garner a greater degree of international attention and intend to make the case that government commitments to reversing or pausing biodiversity loss should receive weight equivalent to action on climate change.

(Jonathan Watts, The Guardian)

The Ethical Quandary of Human Infection Studies

The United States has greatly improved its ability to soundly regulate the ethics of clinical studies since the infamous malfeasance of the Tuskegee syphilis study. Most significantly, the National Research Act of 1974 established Institutional Review Boards to oversee the use of human subjects in accordance with the principles of respect for persons, beneficence, and justice.

The National Research Act was a substantial step forward and set a clear requirement for universal informed consent. However, the expansion of clinical studies to new international regions of extreme poverty, driven in part by the influx of private money from large charitable organizations, has brought novel ethical considerations. In these newly explored populations, where income, education, and literacy levels may be lower, emphasis is now being placed on how to recruit volunteers without implicitly taking advantage of their circumstances.

One area of concern is compensation levels. While compensation in a malaria infection study in Kenya was tied to the local minimum wage, the number of volunteers recruited far surpassed expectations. This may have been due to the fact that payment during this study was guaranteed and consistent, in contrast to local work.

Aware of the concern, two of the largest private funders of medical research, the Bill and Melinda Gates Foundation and the Wellcome Trust, have recently instituted ethical guidelines reinforcing the principle of beneficence, with special emphasis on maximizing benefits over risks. It is an open question whether these protections will be sufficient, but at the very least it is important that rules be put in place proactively rather than reactively.

 

(Linda Nordling, Undark/Scientific American)

 

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

November 20, 2018 at 11:58 am

Science Policy Around the Web – October 19, 2018


By: Ben Wolfson, Ph.D.


Source: Pixabay

Climate Change

 

Climate change prompts a rethink of Everglades management

The Florida Everglades is a large area of tropical wetlands that has received significant attention due to the degradation of its unique ecosystem by urban development. The Everglades were designated a World Heritage Site in 1979 and a Wetland Area of Global Importance in 1987, and in 2000 Congress approved the Comprehensive Everglades Restoration Plan (CERP) to combat further decline and provide a framework for Everglades restoration.

For the past 18 years, these efforts have been directed toward curtailing damage from urbanization and pollution. However, as outlined in a congressionally mandated report released on October 16th by the National Academies of Sciences, Engineering, and Medicine, new strategies may be necessary. In the biennial progress report, an expert panel called for CERP managers to reassess their plans in light of new climate change models. The report notes the 7 centimeters of sea level rise seen since 2000 and points out that Southern Florida is especially at risk from climate change, with an expected 0.8-meter rise in sea level by the year 2100.

It is clear that as more is learned about the realities of climate change, the goals and methods of conservation projects are shifting, and past strategies must be adapted to fit the realities of a warming world.

(Richard Blaustein, Science)

Animal Research

NIH announces plan for chimp retirement

 

In 2015, the NIH announced that it would no longer support biomedical research on chimpanzees, two years after pledging to significantly reduce the number of chimpanzees used in research. These decisions were based on a combination of reduced demand for chimpanzees in research and the designation of captive chimpanzees as an endangered species in 2015.

On Thursday, October 18th, the NIH announced the next step in the process of retiring research chimps. Although research was stopped in 2015, many of the chimpanzees had nowhere to go and remained housed at laboratories. One federal chimpanzee sanctuary, Chimp Haven, exists in Keithville, Louisiana; however, lack of space and the difficulty of relocating some animals have slowed their transition to better habitats.

In the Thursday announcement, NIH director Francis Collins outlined the guidelines for future chimpanzee relocation. These include streamlining medical records and determining whether chimpanzees are physically healthy enough to be relocated. Many of the chimpanzees are at an advanced age and have developed chronic illnesses similar to those experienced by humans. However, Collins emphasized that there must be a more acute medical problem for relocation not to take place. In addition, both the research facility and Chimp Haven must agree that the former research chimpanzees are capable of being relocated, and disagreements will be mediated by a panel of outside veterinarians.

Collins additionally stressed that while transfer to Chimp Haven is the ideal outcome for all retired chimps, those housed at NIH-supported facilities do not live isolated in cages or in laboratories and are housed in social groups with appropriate species-specific accommodations.

The development of these clear guidelines will expedite chimpanzee relocation while prioritizing chimpanzee health and comfort.

(Ike Swetlitz, STAT News)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

October 19, 2018 at 3:25 pm

Science Policy Around the Web – May 1, 2018


By: Liu-Ya Tang, PhD


Source: Pixabay

Artificial Intelligence

With €1.5 billion for artificial intelligence research, Europe pins hopes on ethics

While artificial intelligence (AI) brings convenience to modern life, it may also cause ethical problems. For example, AI systems are generated through machine learning: systems usually have a training phase in which scientists "feed" them existing data and they "learn" to draw conclusions from that input. If the training dataset is biased, the AI system will produce biased results. To put ethical guidelines on AI development and to catch up with the United States and China in AI research, the European Commission announced on April 25 that it would spend €1.5 billion on AI research and innovation through 2020.
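To make the training-bias mechanism concrete, here is a minimal sketch in Python using entirely hypothetical data: a toy hiring classifier trained on records in which one group was historically favored. The point is simply that a bias baked into the training labels resurfaces in the trained system's predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical historical data: "skill" is the only legitimate signal, but
# past hiring decisions also favored group 0, so the labels are biased.
group = rng.integers(0, 2, n)                   # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)                 # legitimate qualification
hired = skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n) > 0.5

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two equally skilled applicants who differ only in group membership:
probs = model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1]
print(f"P(hired | group 0) = {probs[0]:.2f}")
print(f"P(hired | group 1) = {probs[1]:.2f}")
# The gap between the two shows the model reproducing its training bias.
```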

Although the United States and China have made great advances in the field, the ethical issues stemming from AI may have been neglected, as both practice "permissionless innovation," said Eleonore Pauwels, a Belgian ethics researcher at the United Nations University in New York City. She spoke highly of Europe's plan, which is expected to enhance fairness, transparency, privacy, and trust, but the outcome is still unknown. As Bernhard Schölkopf, a machine learning researcher at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, put it, "We do not yet understand well how to make [AI] systems robust, or how to predict the effect of interventions." He also cautioned that focusing only on potential ethical problems could impede AI research in Europe.

Why does European AI research lag behind the United States and China? First, Europe has strong AI research but a weak AI industry. Startup companies with innovative, often risky technologies cannot receive enough funding, as old industrial policies favor big, risk-averse firms; the commission's announcement therefore underscores the importance of public-private partnerships to support new technology development. Second, academic salaries are not high enough to keep AI researchers from leaving for the private sector. To solve this problem, a group of nine prominent AI researchers has asked governments to set up an intergovernmental European Lab for Learning and Intelligent Systems (ELLIS), which would be a "top employer in machine intelligence research" and offer attractive salaries as well as "outstanding academic freedom and visibility."

(Tania Rabesandratana, Science)

Public Health

Bill Gates calls on U.S. to lead fight against a pandemic that could kill 33 million

Pandemics, historically caused by cholera, bubonic plague, smallpox, and influenza, can be devastating to world populations. Several outbreaks of viral disease have been reported in scattered areas around the world, including the 2014 Ebola epidemic, leading to growing concerns about the next pandemic. During an interview conducted last week, Bill Gates discussed the issue of pandemic preparedness with a reporter from The Washington Post. Later, he gave a speech on the challenges associated with modern epidemics before the Massachusetts Medical Society.

The risk of a pandemic is high: the world is highly connected, new pathogens constantly emerge through naturally occurring mutations, and modern technology has raised the possibility of bioterrorism attacks. In less than 36 hours, an infectious pathogen can travel from a remote village to major cities on any continent and become a global crisis. During his speech, Gates cited a simulation by the Institute for Disease Modeling estimating that nearly 33 million people worldwide could be killed by a highly contagious and lethal airborne pathogen like the 1918 influenza. He said "there is a reasonable probability the world will experience such an outbreak in the next 10-15 years." The risk becomes higher when government funding for global health security is inadequate: the U.S. Centers for Disease Control and Prevention is planning to dramatically downsize its epidemic prevention activities in 39 out of 49 countries, which would make these developing countries even more vulnerable to outbreaks of infectious disease.

Gates expressed this urgency to President Trump and senior administration officials at several meetings, and he announced a $12 million Grand Challenge, in partnership with the family of Google co-founder Larry Page, to accelerate the development of a universal flu vaccine. He highlighted scientific and technical advances in the development of better vaccines, antiviral drugs, and diagnostics, which could enable better preparation for, prevention of, and treatment of infectious disease. Beyond this, he emphasized that the United States needs a strategy to utilize and coordinate domestic resources and to take a global leadership role in the fight against a pandemic.

(Lena H. Sun, The Washington Post)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 1, 2018 at 5:53 pm

Science Policy Around the Web – March 6, 2018


By: Cindo O. Nicholson, Ph.D.


Source: Pixabay

Artificial Intelligence & Ethics

Artificial intelligence could identify gang crimes – and ignite an ethical firestorm

Today, many industries and our favorite gadgets use some form of artificial intelligence (AI) to make better predictions about user and consumer behavior. AI is also being adopted by police departments to highlight areas where crime is likely to occur, helping patrol officers intervene before crimes happen (i.e., predictive policing). Recently, at the Artificial Intelligence, Ethics, and Society (AIES) conference in New Orleans, LA, researchers presented a new algorithm that can classify crimes as gang-related based on partial information. In particular, the new algorithm needs only four pieces of information: the primary weapon used, the number of suspects, the neighborhood, and the location (street corner vs. alley, for example) where the crime took place.
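To show the shape of such a partial-information classifier, here is a minimal hypothetical sketch in Python: a generic off-the-shelf model over the four fields named above. The records, encoding, and model choice are invented for illustration and are not the researchers' actual system or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

# Hypothetical incident records: (weapon, suspect count, neighborhood, location type)
records = [
    ("handgun", 3, "A", "street corner"),
    ("knife",   1, "B", "alley"),
    ("handgun", 4, "A", "alley"),
    ("none",    1, "C", "residence"),
]
labels = [1, 0, 1, 0]   # 1 = labeled gang-related in the hypothetical training data

# One-hot encode the categorical fields; keep the suspect count numeric.
categorical = [[r[0], r[2], r[3]] for r in records]
counts = np.array([[r[1]] for r in records], dtype=float)
encoder = OneHotEncoder(handle_unknown="ignore")
X = np.hstack([encoder.fit_transform(categorical).toarray(), counts])

clf = RandomForestClassifier(random_state=0).fit(X, labels)

# Classify a new incident from the same four partial fields:
new = np.hstack([encoder.transform([["handgun", "A", "street corner"]]).toarray(),
                 [[2.0]]])
print(clf.predict_proba(new))   # probability the incident is flagged gang-related
```

Note that nothing in a pipeline like this checks the training labels for bias, which is exactly the objection raised at the conference.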

Many agree that the findings presented (published by AIES) could change the way police approach and respond to crimes by classifying a crime beforehand as gang-related. However, not all in attendance were convinced that the new algorithm would be any better than an officer's intuition and experience, and some believed that relying on such an algorithm could have unintentional, negative consequences. A point of contention at the conference was the appearance that the research team had not given sufficient consideration to whether the training data were controlled for bias, or to what would happen if individuals were misclassified as gang members.

AI is a powerful technology, and it can be applied to solve problems in fields like ecology and conservation, public health, drug development, and others. However, like all powerful technologies, its regulation must keep pace with its development, along with consideration of its potential misuses and unintended consequences.

(Matthew Hutson, Science Magazine)

Science Education

Florida’s residents could soon get the power to alter science classes

The possibility that the public could make recommendations on what instructional materials are used in science classes is moving closer to reality. Two education bills being considered by Florida's legislature would grant the state's residents the means to recommend which instructional materials are used in the classrooms of schools in their district.

The education bills would add to a law enacted in June 2017 that grants Florida's residents the right to challenge the topics educators teach students. In particular, the bills under consideration would allow residents to review instructional materials used in class and suggest changes to those materials. However, the final decision on whether a resident's recommendation is accepted would still rest with the school board.

Among the concerns of the scientific community is that these laws would provide a mechanism for creationists, climate-change deniers, and flat-earth proponents (commonly referred to as "flat-earthers") to insert non-scientific viewpoints into science lesson plans. On the other hand, state representatives in support of these bills contend that highlighting different viewpoints is important and would allow for debate and for students to draw their own conclusions.

While engaging the public on the content of educational curricula could have its merits, it could have negative consequences when public opinion overrides curricula that have been developed from knowledge gained and refined by rigorous, scientific interrogation over several decades. If more education bills that allow the public to challenge instructional materials are going to be approved, it will be imperative that individuals with scientific backgrounds be a voice of reason on school boards.

(Giorgia Guglielmi, Nature Magazine)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

March 7, 2018 at 11:28 am

Science Policy Around the Web – February 9, 2018


By: Rachel Smallwood Shoukry, PhD


Source: Pixabay

Ethics

Big tobacco’s offer: $1 billion for research. Should scientists take it?

A controversial debate has arisen in recent years about whether scientists should accept funding from sources whose interests are at odds with improving the human condition and promoting health. Specifically, should researchers accept research money from tobacco companies? The practice was generally accepted up until a couple of decades ago, but as the harmful effects of smoking have become clearer, along with evidence of the tobacco industry's attempts to cover up and misdirect the public from those effects, the scientific community has become reluctant to partner with "big tobacco" and more aware of conflicts of interest.

The tobacco company Philip Morris International (PMI), maker of Marlboro and other cigarette brands, is looking to invest in research on the illegal cigarette trade and smuggling. It recently established a partnership with the University of Utrecht (UU) in the Netherlands to investigate this phenomenon, but UU has since pulled out of the deal after a large amount of backlash. However, PMI is still looking to fund research on the tobacco industry, setting up the potential for more controversy.

There is additional concern about this possibility due to PMI’s funding of the Foundation for a Smoke-Free World. The foundation has stated that its goals are related to smoking cessation and preventing smoking deaths through several approaches. However, many fear that the foundation is simply a front for PMI to be able to distribute funds under a better-sounding name while continuing to fund research that can be presented in a misleading way to distract from legitimate health concerns. Several top institutions have denounced the Foundation for a Smoke-Free World for using PMI’s funds, and many have vowed that they will not seek grants from or collaborations with the foundation.

Proponents of allowing tobacco-industry funding are interested in research on cigarette alternatives aimed at harm reduction, arguing that little is known about the alternatives' long-term health implications. They say there is little funding outside the tobacco companies for these types of studies and that they don't know where else to turn. They are also worried about the climate surrounding the topic after the response UU received for accepting research dollars from PMI. Opponents, however, believe that PMI and other companies are seeking not harm reduction but to obscure the truth about tobacco's health effects through their research activities and marketing tactics. This ethical debate is sure to continue, as PMI has disclosed that it has received over 50 applications for funding.

(Martin Enserink, Science)

NSF

US science agency will require universities to report sexual harassment

The NSF has announced a new requirement that institutions receiving grants must report grant-funded investigators who have sexual or other harassment claims against them, and whether those investigators were put on leave pending investigation. Many are welcoming this step as movement toward a code of conduct that has been called for in recent years. It also comes on the heels of several research initiatives into sexual harassment in STEM fields, and of other organizations implementing policies to expose and prevent harassment. Although the #MeToo movement brought sexual harassment claims to the forefront of our culture only a few months ago, the STEM field had its own bombshell revelation a couple of years ago, when a renowned astronomer resigned after an investigation revealed years of sexual misconduct and harassment, followed by the unveiling of many similar stories. The new policy is also likely related to the US Congress commissioning the Government Accountability Office to look into sexual harassment by individuals funded by federal scientific agencies.

The notice the NSF sent out also directs recipient institutions to have clear policies on what constitutes harassment and what is appropriate behavior, and to give clear instructions to students and employees on how to report harassment. The institutions themselves will be responsible for conducting investigations and deciding repercussions. Until now the NSF has had an option for voluntarily reporting sexual misconduct by award recipients, but it was rarely used. The notice states that the NSF can remove the responsible personnel from a grant, or even suspend or terminate the grant itself, following the mishandling of a report.

Despite the generally positive view of this attempt by the NSF to deter harassment and establish serious consequences, some have expressed concerns about the potential implications and the logistics of implementation. It was suggested that this step may discourage universities from undertaking investigations of sexual harassment, since universities benefit from grant money and reputation just as investigators do. Another consideration is that universities have different policies on sexual harassment and misconduct; what may be allowable at one institution may be a severe breach at another. It was also not immediately clear from the notice how decisions about a grant will be made following investigations. While perhaps not perfect, this NSF policy is a first step in the right direction toward ensuring everyone can pursue their scientific endeavors in a harassment-free environment.

(Alexandra Witze, Nature)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

February 9, 2018 at 4:01 pm

Science Policy Around the Web – January 30, 2018


By: Kelly Tomins, BSc


By RedCoat (Own work) [CC-BY-SA-2.5], via Wikimedia Commons

Cloning

Yes, They’ve Cloned Monkeys in China. That Doesn’t Mean You’re Next.

Primates have been cloned for the first time with the births of two monkeys, Zhong Zhong and Hua Hua, at the Chinese Academy of Sciences in Shanghai. Despite being born from two separate mothers weeks apart, the two monkeys share the exact same DNA. They were cloned from cells of a single fetus using a method called somatic cell nuclear transfer (SCNT), the same method used to clone over 20 other animal species, beginning with the now famous sheep, Dolly.

The recently published study has excited scientists around the world by demonstrating the potential for expanded use of primates in biomedical research. The impact of cloned monkeys could be tremendous, providing scientists with a model closer to humans for understanding genetic disorders. Gene editing of the monkey embryos was also possible, indicating that scientists could alter genes suspected of causing certain genetic disorders. Such monkeys could then be used as models to understand disease pathology and test innovative treatments, eliminating the differences that can arise from even the smallest natural genetic variation between individuals of the same species.

Despite the excitement over the first cloning of a primate, there is much work to be done before this technique can broadly impact research. The efficiency of the procedure was limited, with only 2 live births resulting from 149 early embryos created by the lab, and the lab could only produce clones from fetal cells; it is still not possible to clone a primate from cells taken after birth. In addition, the future of primate research is uncertain in the United States. Research on the sociality, intelligence, and DNA similarity of primates to humans has raised ethical concerns regarding their use in research. The US has banned the use of chimpanzees in research, and the NIH is currently in the process of retiring all of its chimps to sanctuaries. There are also concerns regarding the proper treatment of many primates in research studies; the FDA recently ended a nicotine study and had to create a new council to oversee animal research after four squirrel monkeys died under suspicious circumstances. With further optimization, it will be fascinating to see whether this primate cloning method will expand the otherwise waning use of primates in research in the United States.

The successful cloning of a primate has additionally heightened ethical concerns over the possibility of cloning humans. Beyond the many safety concerns, several bioethicists agree that human cloning would demean a human's identity and should not be attempted. Either way, Dr. Shoukhrat Mitalipov, director of the Center for Embryonic Cell and Gene Therapy at Oregon Health & Science University, stated that the methods used in this paper would likely not work on humans anyway.

(Gina Kolata, New York Times)

Air Pollution

EPA ends clean air policy opposed by fossil fuel interests

The EPA is ending the "once-in, always-in" policy, which governed how emissions standards differ between various sources of hazardous pollutants. The policy concerns Section 112 of the Clean Air Act, which covers the regulation of sources of air pollutants such as benzene, hexane, and DDE. "Major sources" of pollutants are defined as those with the potential to emit 10 tons per year of one pollutant or 25 tons per year of a combination of air pollutants; "area sources" are stationary sources of air pollutants that are not major sources. Under the policy, once a source was classified as a major source, it remained permanently subject to the stricter pollutant control standards, even if its emissions later fell below the threshold. This was intended to ensure that reductions in emissions continue over time.
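The threshold test itself fits in a few lines of Python. This sketch is purely illustrative: it simplifies the statutory "potential to emit" test to an actual emissions inventory, using the 10- and 25-ton figures quoted above.

```python
def classify_source(emissions_tons_per_year):
    """Apply the Section 112 thresholds to a {pollutant: tons/year} mapping."""
    if max(emissions_tons_per_year.values()) >= 10.0:   # any single pollutant
        return "major source"
    if sum(emissions_tons_per_year.values()) >= 25.0:   # combined pollutants
        return "major source"
    return "area source"

# Under "once-in, always-in," a source once classified as major stayed major
# even if a later inventory like this one fell below both thresholds; under
# the new policy it would be reclassified and held to area-source standards.
print(classify_source({"benzene": 4.0, "hexane": 6.0}))   # -> area source
```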

The change in policy means that major sources of pollution that dip below the emissions thresholds will be reclassified as area sources and thus held to less strict air safety standards. Fossil fuel companies have petitioned for this change for years, and the recent policy change is being lauded by Republicans and by states with high gas and coal production. The EPA news release states that the outdated policy disincentivized companies from voluntarily reducing emissions, since they would be held to major-source standards regardless of how much they emitted. Bill Wehrum, a former lawyer representing fossil fuel companies and current Assistant Administrator of EPA's Office of Air and Radiation, stated that reversing the policy "will reduce regulatory burden for industries and the states." In contrast, environmentalists believe the change will drastically increase the amount of pollution plants expel, since standards soften once a source falls below the threshold: as long as sources remain just below the major-source threshold, there will be no incentive or regulation pushing them to lower emissions further.

(Michael Biesecker, Associated Press)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

January 30, 2018 at 3:30 pm