Science Policy For All

Because science policy affects everyone.

Archive for July 2018

Science Policy Around the Web – July 31, 2018


By: Patrice J. Persad, PhD


Science and Society

The ethics of computer science: this researcher has a controversial proposal

A computer scientist with good intentions naturally tends to be optimistic about the societal implications of his or her discoveries or findings. Unfortunately, this naivety, or lack of foresight, regarding secondary uses and repercussions of computer applications in everyday life can be damaging. As illustrations of unintended consequences, automated tasks based on machine learning algorithms may be time efficient but can displace millions of workers. Likewise, seemingly unlimited data storage capabilities and potent graphical processing unit (GPU) computing permit building predictive models of consumers’ behavior; such unrestricted data access and use can infringe on individuals’ privacy and call the voluntary nature of the consent process into question.

To magnify the importance of the shortcomings of computer applications—notably artificial intelligence (AI)—in relation to society, Dr. Brent Hecht of Northwestern University has a plan: rather than only lauding their findings’ positive influences on society, computer science researchers would also have to disclose the negative implications of their research in publications and other press-related media.

The Future of Computing Academy (FCA), which Hecht oversees and which is a branch of the Association for Computing Machinery (ACM), promotes this duty of negative-impact disclosure during the peer review process. The motivation stems from fostering researchers’ accountability to the general public; it casts the computer scientist not as a mindless mass producer but as a mindful protector of the public’s welfare. Acknowledging the downsides of applications encourages discussing and implementing solutions, and this deepened accountability also helps restore the public’s trust in the computer science community. As Hecht explains, here is what fellow computer scientists, as authors and peer reviewers, can do right now to help recognize negative societal impacts:

  1. As an author, include a section entitled “Broader Impacts” or “Societal Impacts” that discloses negative impacts alongside positive ones. Readers do not expect authors to be seers; discussing, in the context of the existing literature, secondary uses that could harm the public is a reasonable start (if not sufficient).
  2. As a peer reviewer, if the submission does not list them, ask outright: “What are the work’s negative societal impacts?” Stress that disclosing such information will not by itself warrant rejection of the manuscript. (On the other hand, if negative impacts outweigh positive ones, funding agencies can use their discretion in supporting projects.)
  3. When communicating with the press, remember to mention negative societal impacts, and be prepared to address relevant questions and comments.

(Elizabeth Gibney, Nature)

Bioethics

Did a study of Indonesian people who spend most of their days under water violate ethical rules?

At the heart of any study involving human subjects, the potential for an ethical dilemma is strong when research policies and regulations are unclear or inaccessible. Or, to put it bluntly, a question torments the researcher when ethical matters cross into legal waters: “Will I go to jail if I unknowingly breach research protocol (even if that protocol is under debate or revision)?” The dilemma is especially likely when the principal investigators are foreigners from developed countries and the proposed study focuses on indigenous populations in developing nations. Consider the research presented in the April 2018 Cell article “Physiological and Genetic Adaptations to Diving in Sea Nomads” by Dr. Melissa A. Ilardo and colleagues. The investigation demonstrated that genetic variation in PDE10A is associated with a larger spleen size in the Bajau people, Indonesian “Sea Nomads” who have practiced extreme breath-hold diving for over a thousand years. The Ministry of Research, Technology and Higher Education (RISTEK) in Indonesia granted the team a permit to pursue the study. However, the ethical conflict stems from:

  1. local organizations’ claims that the team did not receive approval from at least one Indonesian research ethics commission/committee (see the Council for International Organizations of Medical Sciences, CIOMS, guidelines).
  2. failure to procure approval from the Indonesian National Institute of Health Research and Development to transport human DNA samples out of Indonesia.
  3. lack of research involvement on the part of Indonesian scientists, especially geneticists.
  4. inadequate presentation of overall research results to study populations, including the Bajau, before publication.

In defense of Ilardo and colleagues, supporters point out that the Indonesian government has not reprimanded any team member for research indiscretions, and Cell found no issues with the documents the group provided from that government. As for engaging more with Indonesian scientists on local research projects, Ilardo’s unanswered e-mails to several local professionals prior to data and specimen collection show that involvement was at least attempted. In hindsight (or perhaps by coincidence), RISTEK launched an online portal in early July where foreign researchers can easily access all the protocols and documentation required for permits.

Foreign researchers should realize that the ethical concerns raised here—among them, approval (consent) from governmental and national organizations and the transfer of biological specimens out of developing countries—are not trivial. Jail time is not the only thing at stake. Research cooperation with other nations’ institutions can affect international relations and local communities’ trust in foreign researchers, and both influence the success of future research endeavors in developing and other nations.

(Dyna Rochmyaningsih, Science)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 31, 2018 at 4:56 pm

Science Policy Around the Web – July 27, 2018


By: Emily Petrus, Ph.D.


Innovation

Artificial Intelligence Has a Bias Problem, and It’s Our Fault

While computer and data scientists work to create systems that can reason and perform complex analyses, ethicists, lawyers, and human rights advocates are expressing increasing concern about the impact artificial intelligence will have on everyday life. It is becoming apparent that human bias regarding race, gender, and socioeconomic position also influences the algorithms and data sets used to train machine learning software.

Most artificial intelligence (AI) systems are trained on data sets culled from the internet. This results in skewed data that over-represent images and language from the United States. For example, an image of a white woman in a white dress leads algorithms to label a picture as “bride” or “wedding”, while an image of a North Indian bride is labeled “performance art”. If that seems like a harmless hiccup, consider algorithms designed to detect skin cancer from images. A recently published model did a decent job detecting dark moles on light skin, but only 5% of its data set depicted dark-skinned people, and the algorithm wasn’t even tested on those images. This bias could skew diagnoses for already underserved minority populations in the United States. Finally, beyond replacing human jobs, particularly in manufacturing, AI will have a huge impact on financial decisions: loan eligibility and job candidate hiring decisions are being filtered through AI technology, which is guided by data that may be biased.

It is apparent that computer scientists must make concerted efforts to un-bias training data sets and increase transparency when they develop new AI systems. Unfortunately, these common-sense suggestions are just that: suggestions. Before Obama left office, his administration released a roadmap in fall 2016 to guide research and development of AI systems. There are no teeth in policy dictating fairness and inclusivity in AI development, but private and academic institutions are making gains in this arena. The Human-Centered AI project at Stanford University and the Fairness, Accountability, Transparency, and Ethics (FATE) in AI research group at Microsoft are two examples of these efforts. Both groups seek to increase inclusivity in AI algorithms and reduce bias, whether human or computer generated. AI can also be trained to detect biases in both training data and models by conducting an AI audit. A joint effort by developers in academia and private industry will be necessary to produce AI systems and prove they are unbiased, and it is unlikely that federal regulators would have the power or dexterity to administer concrete rules for this technology. As with most scientific advances that bring significant monetary gains, the pace is breakneck, but corners should not be cut. Legislation is unlikely to keep up with the technology, so incentives to keep the playing field fair should come from within the AI community itself.
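For a concrete sense of what such an audit involves, here is a minimal sketch (my own illustration, not taken from the article or any named project) of two checks an “AI audit” might start with: how well each demographic group is represented in the training data, and how model accuracy differs across groups. The dataset fields and numbers are hypothetical.

```python
from collections import Counter

# Each record: (skin_tone_group, true_label, predicted_label) -- all hypothetical.
records = [
    ("light", "malignant", "malignant"),
    ("light", "benign", "benign"),
    ("light", "benign", "benign"),
    ("light", "malignant", "malignant"),
    ("dark", "malignant", "benign"),   # the only dark-skin malignant case is missed
    ("dark", "benign", "benign"),
]

# Check 1: representation -- what fraction of the data comes from each group?
counts = Counter(group for group, _, _ in records)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n}/{total} samples ({n / total:.0%})")

# Check 2: per-group accuracy -- a model can look fine overall while failing a minority group.
overall = sum(t == p for _, t, p in records) / len(records)
print(f"overall accuracy: {overall:.0%}")
for group in counts:
    subset = [(t, p) for g, t, p in records if g == group]
    acc = sum(t == p for t, p in subset) / len(subset)
    print(f"accuracy on {group}-skin images: {acc:.0%}")
```

Even this toy example shows how a respectable overall accuracy can hide a failure on the under-represented group, which is exactly the concern raised about the skin-cancer data set above.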

(Ben Dickson, PC Mag)

Scientific oversight

NIH delays controversial clinical trials policy for some studies

How does the brain process images of faces? How do we respond to frustrating situations? What does the mind of a sociopath look like in an MRI? These are all basic science questions in brain research whose answers may inform treatment options in future studies. But for the moment, no drugs or interventions are being tested in many basic research labs funded by the National Institutes of Health (NIH). That means they’re not clinical interventions, or by definition, clinical trials, right? Maybe…

Basic researchers studying the healthy human brain can breathe a sigh of relief, as the NIH has decided to delay new rules governing the classification of human studies. At issue is the re-classification of research that can be considered a clinical trial. The intent of the new guidelines was to increase reproducibility and transparency in government-funded human research, for example by requiring more rigorous statistical practices. In practice, investigators would be required to register their studies on clinicaltrials.gov, take mandatory trainings, and produce significantly more paperwork to continue receiving funding for their basic research. In addition, researchers were concerned that the policy would confuse the public, as their research would be inaccurately represented as clinical trials.

After the announcement last year, professional societies and academics sent letters of complaint to NIH, prompting Congress to delay implementation of the requirements to September 2019. The delay also gives leniency to basic researchers who apply to funding opportunity announcements seeking studies labeled as clinical trials, meaning they will not be immediately disqualified from being scored. Although many researchers hoped the NIH would drop all requirements for basic research, the delay is welcome for now. “This delay is progress because it gives them more time to get it right, and in the interim people aren’t going to be in trouble if they get it wrong,” said Jeremy Wolfe, a cognitive psychologist at Harvard Medical School.

(Jocelyn Kaiser, Science)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 27, 2018 at 4:51 pm


Science Policy Around the Web – July 24, 2018


By: Janani Prabhakar, Ph.D.


The Scientific Workforce

Has the tide turned towards responsible metrics in research?

Quantifying progress and success in science has long been a challenging task. What is the correct metric to use? Mathematics can provide insight into the general impact of journals or quantify an individual researcher’s productivity based on publication rate, but predictive statistics are much less common in academia. Predictive statistics and machine learning approaches are used often in other industries and sectors; as this article points out, predictive modeling is used in baseball to identify new talent. Why not in academia? Private companies such as Academic Analytics already provide such statistics, offering versions of multiple existing metrics to measure potential success, including citations, H-indices, impact factors, and grant income. The desire for such statistics comes from everyone from those making hiring decisions within academia to policymakers making budgetary decisions. The need to quantify potential success is apparent, but when, where, and how—and how to do so ethically—is still hotly debated.

The San Francisco Declaration on Research Assessment (Dora) called for an end to the use of journal impact factors in funding and hiring decisions and a focus on other measures instead. The UK has made some strides toward ensuring that any metrics used still cohere to the principles of science and reflect ‘responsible metrics’; these principles include robustness, humility, transparency, diversity, and reflexivity. A recent report evaluates progress on these fronts over the last five years: of 96 UK universities and research organizations, 21 have already agreed to follow these principles, and some universities have begun to implement their own policies beyond those outlined in Dora. The result is a growing body of data on good metrics and practices that universities can use to shift policy, improve the academic environment, reduce abuse, and employ responsible management practices. These data are a valuable resource for driving change in universities at the global level.

(James Wilsdon, The Guardian)

Psychology

Confronting Implicit Bias in the New York Police Department

After a long history of police brutality toward black men, the role of racial bias has moved front and center in the American dialogue. Implicit bias refers to biases that are unintentional, unconscious, and more pervasive than overt racial bias alone. Erasing such biases requires overcoming one’s own stereotypes and using facts to make rational decisions. As part of Mayor Bill de Blasio’s police reform efforts, a training program on implicit bias will run through next year, conducted by Fair and Impartial Policing, a Florida company that provides such training to many police departments. The program will cost the city $4.5 million, yet there are no data to assess the training’s effectiveness. The lack of objective data is troubling to policymakers and researchers, given how widely this training has spread across police departments. Dr. Patricia G. Devine, a professor at the University of Wisconsin, has stated that we first need to know more about officers’ unintentional biases to determine whether the training has a significant effect. The longevity of the training’s effects also needs to be determined, both in terms of changes in officer behavior and the extent to which the community benefits.

Despite the lack of such data, feedback from trainers suggests that over the course of the training period, police officers’ initial hesitance turns into a better appreciation of the role stereotypes play in how they choose to act. For many officers, the training is an opportunity to reflect on their own behaviors and understand them in light of their own tendencies to stereotype by race. The training isn’t meant to cure officers of their biases, but rather to help them confront and manage them. Officers are shown case studies in which biases lead police to confront white and black individuals differently, allowing them to appreciate the real-world consequences of implicit bias. They are then taught strategies to reduce and manage their biases, and to recognize biases in others. Part of the process is also to help officers make “unhurried decisions” so they have time to think, strategize, and make appropriate choices. Without metrics, the program’s long-term viability may be questioned, but from the perspective of many participants it is a big step in the right direction, as it acknowledges underlying prejudices that might not otherwise have been recognized.

(Al Baker, The New York Times)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 25, 2018 at 5:23 pm

Science Policy Around the Web – July 20, 2018


By: Mohor Sengupta, PhD


Innovation

3-D Color X-Rays Could Help Spot Deadly Disease Without Surgery

Traditional X-ray and CT (computerized tomography) scanners pass X-ray beams through the body and detect the transmitted radiation. Dense tissues, which absorb much of the beam, appear white, while softer tissues, which transmit most of it, appear darker. Dr. Anthony Butler of the University of Otago in New Zealand, along with his father Phil Butler, has made a crucial breakthrough in this imaging technique. In designing their scanner, they applied the principle of a pixel-detecting chip used in the Large Hadron Collider at CERN. Their tool records how the X-ray’s wavelength changes as the beam passes through different materials and assigns each altered wavelength a pixel (the smallest individual unit of a digital image) of a certain color. The colored pixel thus identifies the material the X-ray beam has passed through. For example, if the X-ray passes through bone, the calcium atoms in the bone alter its wavelength, which is recorded as one color, say pink; a beam passing through a different tissue yields a different color. The tool then translates these data into a 3-D color image.
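To make the idea of wavelength- (energy-) resolved detection more concrete, here is a toy sketch of the logic; it is purely illustrative and does not reproduce the Butlers’ actual reconstruction pipeline. The energy bins, material signatures, thresholds, and color assignments are invented for the example.

```python
# Toy illustration: a photon-counting detector sorts the X-ray photons reaching each
# pixel into energy bins; because materials attenuate different energies differently,
# the pattern of counts can be mapped to a material class and hence a display color.
# All signatures and thresholds below are made up.

def classify_pixel(bin_counts):
    """bin_counts: photons detected in (low, mid, high) energy bins for one pixel."""
    low, mid, high = bin_counts
    total = low + mid + high
    if total == 0:
        return "black (no signal)"
    low_frac = low / total
    # Hypothetical signatures: bone (calcium) removes most low-energy photons,
    # fat removes some, soft tissue lets most of the spectrum through.
    if low_frac < 0.15:
        return "pink (bone-like)"
    if low_frac < 0.35:
        return "yellow (fat-like)"
    return "red (soft-tissue-like)"

# One simulated row of detector pixels becomes one row of the color image.
row = [(5, 40, 55), (30, 35, 35), (45, 30, 25), (0, 0, 0)]
print([classify_pixel(p) for p in row])
```

A real spectral scanner reconstructs material composition far more rigorously than these hand-set thresholds, but the principle the article describes—the color encodes what the beam passed through—is the same.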

This imaging technique can provide very high-resolution images of tissues without any invasive procedures. Its developers have already resolved minute details of various tissues—cartilage, bone, adipose tissue—in scanned ankles and wrists, and they plan to scan the entire human body eventually. Since the imager can take pictures of areas deep inside the body, it should simplify the diagnosis of many hard-to-detect medical issues, such as cancer, heart abnormalities, and blood disorders. “It’s about being able to first find the explanation for somebody’s symptoms, like a tumor, and then find the best way to reach it with the least amount of detours and misadventures,” said Dr. Gary E. Friedlaender, an orthopedic surgeon at Yale University.

Aurélie Pezous is a knowledge transfer officer at CERN. She promotes outside uses of research techniques developed by the organization. Of the recent applications of the pixel detecting tool in medicine, she said, “This is the beauty of it: Technology that was first intended for the field of high-energy physics is being used to improve society. It’s very exciting for CERN”.

In the coming months, clinical trials will enroll orthopedic and rheumatology patients to test the new 3-D color X-ray scanner.

(Emily Baumgaertner, New York Times)

Drug pricing

Trump administration to explore allowing drug imports to counter price hikes

In February of last year, Bernie Sanders, along with many of his Democratic colleagues, introduced legislation in the House and Senate to allow drug importation from Canada in order to rein in rising drug prices in the United States. Drugs are cheaper in many other countries because of government regulations on pricing. Since that legislation, the idea has been championed by Sanders and Trump alike.

Steep and regular drug price hikes plague American consumers, particularly for off-patent drugs produced by a single manufacturer. The case of Martin Shkreli, who became infamous for hiking the price of Daraprim, a drug used by AIDS patients, by more than 5,000 percent after his company Turing Pharmaceuticals acquired its manufacturing rights in 2015, has been cited as an example of blatant abuse of the current system. Alex Azar, secretary of Health and Human Services, has suggested that in such situations an effective solution could be to import drugs from a reliable foreign source, effectively introducing competition into the local manufacturing arena and curbing prices within the United States in the process. As federal law on drug importation currently stands, it is illegal to import foreign-approved medicines except to meet shortages in supply, something that happened after the hurricanes in Puerto Rico last year. FDA commissioner Scott Gottlieb has likened steep price hikes to drug shortages, as they create similar public health consequences for consumers, and he believes that temporary importation of foreign-approved drugs could help, at least until competition resumes and prices come down.

Yesterday, Gottlieb criticized makers of high-priced medicines for stalling the manufacture and availability of low-priced alternative versions of the same compounds. The federal government’s focus here is the importation of medically necessary drugs approved in other countries as a reasonable substitute for the FDA-approved version in the United States. If import legalization comes about, it will be a major disappointment for pharmaceutical companies at home, which, along with many Republicans, have strongly opposed the move.

Shortly after Sanders introduced his legislation in 2017, four former FDA commissioners issued an open letter to members of Congress citing the dangers of exposing Americans to imported drugs that have not undergone the scrutiny the FDA applies to American-made drugs. Despite the criticism, the proposed importation of foreign-approved drugs seems to have the federal government’s backing, which is good news for many consumers and a blow to the monopoly of domestic drug-makers.

(Laurie McGinley, The Washington Post)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 20, 2018 at 4:01 pm

Science Policy Around the Web – July 17, 2018


By: Saurav Seshadri, PhD


Animal cruelty

New digital chemical screening tool could help eliminate animal testing

An algorithm trained to predict chemical safety could spare over 2 million laboratory animals per year from being used in toxicological screening.  Researchers at the Johns Hopkins University Center for Alternatives to Animal Testing (CAAT) report that their model reproduced published findings at a rate of almost 90% – higher, on average, than replication studies done in actual animals.  The model pools information from various public databases (including PubChem and the European Chemicals Agency or ECHA) to extract ~800,000 structural properties from ~80,000 previously tested chemicals, which can be used to assign a chemical similarity profile to a new compound.  The model then uses a supervised learning approach, based on previous results, to determine what toxicological effects would likely be associated with that compound’s profile.
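The underlying idea is easy to sketch. The following is a minimal illustration of the similarity-based prediction described above, not the CAAT group’s actual model; the structural features, chemicals, and labels are invented, and a production system would use far richer fingerprints and many more endpoints.

```python
# Minimal sketch: represent each chemical as a set of structural features, find the
# most similar previously tested chemicals, and vote on the toxicity label.
# Features and labels below are invented for illustration.

def tanimoto(a, b):
    """Jaccard/Tanimoto similarity between two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Previously tested chemicals: structural features -> known outcome for one endpoint.
known = [
    ({"aromatic_ring", "nitro_group", "chlorine"}, "skin_irritant"),
    ({"aromatic_ring", "hydroxyl"},                "non_irritant"),
    ({"nitro_group", "chlorine", "long_chain"},    "skin_irritant"),
    ({"hydroxyl", "long_chain"},                   "non_irritant"),
]

def predict(query_features, k=3):
    """Predict the endpoint for an untested compound from its k most similar neighbors."""
    neighbors = sorted(known, key=lambda kv: tanimoto(query_features, kv[0]), reverse=True)[:k]
    votes = {}
    for feats, label in neighbors:
        votes[label] = votes.get(label, 0.0) + tanimoto(query_features, feats)
    return max(votes, key=votes.get)

print(predict({"aromatic_ring", "nitro_group"}))  # -> skin_irritant
```

The published model applies this kind of reasoning at a very different scale — roughly 80,000 characterized chemicals and about 800,000 structural properties — with a supervised learning layer on top, which is what makes it competitive with animal replication studies.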

The principle employed by the tool, i.e. predicting the toxicity of unknown compounds based on structural similarity to known compounds, is called read-across and is not new.  It was a core goal of REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals), an effort by the ECHA in 2007 to gather toxicological information about all chemicals marketed in the EU.  However, limited data (only ~20,000 compounds) and legal hurdles (ECHA claimed that its data was proprietary and prevented an earlier version of the algorithm from being released) delayed the publication of a machine learning-empowered read-across tool until now.  As it stands, the prospective availability of the tool is unclear: the authors intend to commercialize it through a company called Underwriters Laboratories, but claim to have shared it with other academic and government laboratories.

Another key question is how the tool will be received by regulatory agencies, particularly the Environmental Protection Agency (EPA).  In 2016, Congress passed legislation mandating thorough and transparent evaluation of chemical safety, and encouraging reduced animal testing.  To these ends, US agencies are already developing databases (such as ToxCast) and predictive modeling tools (e.g. this workshop organized by the Interagency Coordinating Committee on the Validation of Alternative Methods) – whether this model could be integrated with ongoing efforts remains to be seen.  While the authors point out that complex endpoints like cancer are still beyond the scope of the tool and will require in vivo testing, subjecting animals to tests for eye irritation and inhalation toxicity may soon, thankfully, be a thing of the past.

(Vanessa Zainzinger, Science)

 

Gene editing

Controversial CRISPR ‘gene drives’ tested in mammals for the first time

A potentially transformative application of the CRISPR gene editing technology has been given a dose of reality by a recent study.  A gene drive is an engineered DNA sequence encoding a mutated gene as well as the CRISPR machinery required to copy it to the animal’s other chromosome, thereby allowing the mutation to circumvent normal inheritance and spread exponentially in a population.  Researchers quickly recognized the potential of this approach to control problematic populations, such as malaria-carrying mosquitoes, and proof-of-concept studies in insects have been successful.  However, until gene drives could be proven effective in mammals, their applicability to invasive rodent populations has been unclear.

Now, researchers at UCSD have shown that the approach can work in mice, but with possibly insurmountable caveats.  In order to achieve efficient copying of the mutated gene, the team specifically turned on the DNA-cutting enzyme Cas9 during meiosis, when dividing sperm and egg cells are biased towards using gene insertion to repair DNA breaks.  Using this approach, they were able to boost inheritance of a mutation that produces a white coat in normally dark mice, from 50% to 73% of offspring.  However, due to differences in the mechanisms of sperm and egg production between mice and insects, the effect was only seen in females; even in these, differences in the timing of Cas9 activity between different strains of mice led to inconsistent phenotypes in the offspring (i.e., grey coats).  Ultimately, the low efficiency observed precludes any realistic application to population control in the wild.
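To see why the difference between 50% and 73% transmission matters for population control, here is a back-of-the-envelope model; it is my own simplification, not the study’s analysis, and it ignores the sex-specific effects, mosaicism, and fitness costs discussed above.

```python
# Toy model: fraction of mice carrying the engineered allele over generations when a
# carrier parent transmits it to a fraction t of offspring (t = 0.5 is ordinary
# Mendelian inheritance; the study measured ~0.73, and only in females).

def carrier_fraction_over_time(t, p0=0.05, generations=12):
    """Deterministic expected carrier frequency under random mating, no fitness cost."""
    p, freqs = p0, []
    for _ in range(generations):
        freqs.append(round(p, 3))
        both_carriers = p * p              # both parents carry the allele
        one_carrier = 2 * p * (1 - p)      # exactly one parent carries it
        # Probability an offspring inherits at least one copy:
        p = both_carriers * (1 - (1 - t) ** 2) + one_carrier * t
    return freqs

print("Mendelian (t = 0.50):", carrier_fraction_over_time(0.50))
print("Gene drive (t = 0.73):", carrier_fraction_over_time(0.73))
```

Even this crude model shows the qualitative point: at 50% transmission the allele merely persists, while biased transmission lets it climb steadily through a population — and it also hints at why a 73% rate, achieved only in females and with inconsistent phenotypes, falls well short of what a field application would need.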

This cautious result may be welcomed by opponents of gene drive technology: environmental activists, fearing the uncontrollable effects of an accidental release, had called for a moratorium on gene drive research at the UN Convention on Biodiversity in 2016.  However, this call was rejected, and groups such as GBIRd (Genetic Biocontrol of Invasive Rodents) are dedicated to finding responsible ways forward.  One example is to restrict use of modified mice to islands, which may be too populated for large-scale pesticide use but still geographically self-contained.  Another compromise, suggested by the authors, is to use the method to speed up production of polygenic disease model mice.  Overall, like CRISPR in general, it appears that population-level gene editing will need substantially more research before it can realize its dramatic potential.

(Ewen Callaway, Nature)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 17, 2018 at 5:09 pm

Science Policy Around the Web – July 10, 2018


By: Liu-Ya Tang, PhD


Public health

Chronic pain patients, overlooked in opioid crisis, getting new attention from top at FDA

The opioid crisis has become a major problem in the United States, with more than 40,000 deaths from opioid overdoses a year. To address it, opioid production was reduced by 25 percent in 2016 and by an additional 20 percent in 2017, with the aim of preventing over-prescription. However, cancer patients who need opioids for pain control may suffer due to the resulting shortage. Moreover, cancer patients face stigma from both health care providers and society in general for using opioids as pain relievers.

Sara Ray and Kathleen Hoffman, who work for a health-oriented social network called Inspire, reviewed 140 public posts written by cancer patients and their caregivers describing problems getting the medications they need. One prominent issue is that some doctors are reluctant to prescribe opioids because of the possibility of addiction, which has made cancer patients feel like drug seekers. There is also an overwhelming amount of information about addiction and drug dependence from the media, the government, and even health care providers, which can cause confusion and misunderstanding among cancer patients and their caregivers. As a result, some patients would rather tolerate the pain than risk addiction. Some patients commented that not all oncologists are knowledgeable about treating cancer-related pain, and that cancer patients should seek help from providers who specialize in pain management.

Ensuring that every cancer patient has equal access to such care, raising awareness of pain management, and increasing the availability of pain medication are all important. Both health care providers and patients need better education on pain management, which could help counteract the current stigma and remove barriers to the legitimate use of opioids by cancer patients.

(Jayne O’Donnell and Josephine Chu, USA Today)

Research progress

There’s no limit to longevity, says study that revives human lifespan debate

The average human life span has increased steadily over the past 100 years. Life expectancy has more than doubled, from about 25 years to about 65 for men and 70 for women. This sparks the question: is there a limit to the length of human life? A recent study, published in Science, reported that there may be no limit to how long humans can live.

A research team led by Sapienza University demographer Elisabetta Barbi and University of Roma Tre statistician Francesco Lagona performed a statistical analysis of the survival probabilities of nearly 4,000 ‘super-elderly’ people in Italy, all aged 105 and older. They found that the risk of death increases as people age up to about 105, but then flattens out, which might suggest that there is no limit to human longevity. The concept of a mortality plateau is not new and has been raised in previous studies; compared with previous claims, however, this study used a more rigorous data collection process and better statistical methods.
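For intuition about what “flattening out” means statistically, the sketch below contrasts an ever-rising (Gompertz-style) hazard with a plateau model that goes flat past a threshold age. The parameters are invented for illustration and are not the study’s estimates.

```python
# Illustration only: annual risk of death under an ever-rising (Gompertz-type) hazard
# versus one that plateaus after age 105. Parameters are made up, not fitted values.
import math

def gompertz_hazard(age, a=1e-5, b=0.10):
    """Annual death risk that grows exponentially with age (capped at 1)."""
    return min(1.0, a * math.exp(b * age))

def plateau_hazard(age, threshold=105):
    """Same rising risk below the threshold, then constant."""
    return gompertz_hazard(min(age, threshold))

for age in (80, 95, 105, 110, 115):
    print(f"age {age}: rising {gompertz_hazard(age):.2f} vs plateau {plateau_hazard(age):.2f}")
```

Under the plateau model, someone who reaches 110 faces roughly the same odds of dying in the next year as someone at 105, so no particular age acts as a hard ceiling — which is the sense in which the authors argue there may be no limit.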

Even so, experts disagree about the finding. Since the analysis covered only the Italian population, Jean-Marie Robine, a demographer at the French Institute of Health and Medical Research in Montpellier, suggested doing a global analysis; he notes that unpublished data from France, Japan, and Canada suggest the evidence for a mortality plateau is “not as clear cut”. Other experts question the conclusion on biological grounds. Jay Olshansky, a bio-demographer at the University of Illinois at Chicago, said that some types of cells, such as neurons, cannot replicate and have a finite “length of life”, and that the irreversible cell death that comes with aging places “upper boundaries on humans”.

Though there are differing opinions on whether there is a limit to the human life span, many researchers hope to better understand the mechanisms of the mortality plateau and the process of aging, to facilitate developing interventions that slow aging.

(Elie Dolgin, Nature news)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 17, 2018 at 4:54 pm

Science Policy Around the Web – July 6, 2018


By: Kelly Tomins, BSc


Genetic privacy

Could DNA Testing Reunite Immigrant Families? Get the Facts.

Since the enactment of the Trump administration’s “zero tolerance” immigration policy, over 2,300 children have been separated from their families at the border. The policy caused widespread outrage throughout the US, and over 400,000 people protested it at the “Families Belong Together” march last week. Although the policy has since been rescinded, the government has shown little transparency about how it plans to reunite families. Could DNA testing be a solution?

DNA testing companies MyHeritage and 23andMe seem to think so; they have offered thousands of testing kits to help reunite migrant children with their families. Scientifically, these tests are very reliable and can detect direct relationships with 99.9% accuracy. However, the science is the least complicated aspect of this situation.

Consent and privacy are among the most troubling aspects of using these tests. Because of medical privacy rules, children would need a designated legal guardian or representative to have their DNA tested, which is clearly a problem here. In addition, the adults likely cannot give truly informed consent, especially since they are in distressing conditions and many do not speak English. Migrants may feel pressured to undergo sequencing if they believe it is the only way to be reunited with their children. DNA testing reveals private information about health and paternity, and genetic data stored in databases have been used to track criminals. It is difficult to imagine that detainees would be given enough information about DNA testing and its implications to make an informed decision.

Despite these concerns, according to an unnamed federal official, DNA testing has already begun. Jennifer K. Falcon, communications director for RAICES, a nonprofit in Texas that offers free and low-cost legal services to immigrants and refugees, strongly opposes DNA testing in this context. In addition to her concerns about consent, she argues that the government would gain access to extremely personal data that could be used for future surveillance. Although 23andMe and MyHeritage have given assurances that the genetic data will be used only for reunification, it is unclear what will happen to the DNA samples and data afterwards.

Beyond the ethical and logistical hurdles, DNA testing is not a quick fix. 23andMe states on its website that sample processing takes 6-8 weeks, and it would be a logistical nightmare to obtain and match DNA samples from all the detainees currently in custody, especially when matching results from two different genetic testing companies. Critics point out that simply registering the identities and locations of migrant parents and children would have circumvented the need for such invasive testing. Although genetic tests are cheaper and more accessible than ever, they require careful consideration of privacy and consent.

(Maya Wei-Haas, National Geographic)

Endangered species

Rhino Embryos Made in Lab to Save Nearly Extinct Subspecies

Thousands of northern white rhinos once inhabited the grasslands of east and central Africa, but habitat loss and poaching led to the population’s swift demise. All hope for the subspecies’ survival seemed lost when its last remaining male, Sudan, died earlier this year. There are now only two surviving individuals, a mother-daughter pair named Najin and Fatu, both of whom are infertile. Remarkably, a new breakthrough in reproductive technology has reignited the possibility of saving the subspecies.

In a recent study published in Nature Communications, Dr. Thomas Hildebrandt, a wildlife reproductive biologist, and his team show for the first time that rhino embryos can be created using in vitro fertilization (IVF). Although no living males of the subspecies remain, four samples of frozen sperm could potentially be used for reproduction. The research group created four hybrid embryos by combining frozen northern white rhino sperm with eggs from southern white rhinos. The scientists plan to implant these hybrid embryos into surrogates to see if they survive to birth. If that succeeds, they aim to extract eggs from the two remaining female northern white rhinos and create purebred northern white rhino embryos in the lab.

Since the supply of northern white rhino gametes is limited (only four sperm samples and two egg samples), Hildebrandt and his team are also pursuing a technology called induced pluripotent stem cells (iPSCs). iPSCs are stem cells generated by reprogramming adult cells, such as skin or blood cells, and they can then be differentiated into various cell types. iPSCs have already been created from northern white rhinos, and scientists are now working out how to convert them into sperm and eggs. Since the San Diego Zoo has skin cells from 12 northern white rhinos, converting these cells into gametes could provide more genetic diversity for any future population.

While many conservation scientists applaud the use of technology to save the subspecies, others wonder whether the resources would be better spent protecting habitat for the rhinos that remain. In a study in Nature Ecology and Evolution, scientists showed that de-extinction efforts can lead to a net biodiversity loss, since the same resources could otherwise support living endangered species. As Dr. Bennett, a conservation scientist at Carleton University, puts it, “if the person is couching de-extinction in terms of conservation, then she or he needs to have a very sober look at what one could do with those millions of dollars with living species — there’s already plenty to do.”

(Steph Yin, New York Times)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 6, 2018 at 3:11 pm