Science Policy For All

Because science policy affects everyone.


Mapping the Human Exposome: Understanding the “E” after the “G”


By: Bryan Bassig, Ph.D.


Source: Pixabay


Current efforts to maximize our understanding of the known interplay between genetic and environmental factors in disease etiology have the potential to inform future research priorities and disease management and prevention.

Defining the concept of the ‘exposome’

It is now commonly accepted that the etiology of most chronic diseases is a combination of genetics and environmental exposures, and most likely the interaction of these factors (“G” x “E”). The breadth of environmental exposures that have been implicated in chronic disease risk is substantial and includes personally modifiable factors such as smoking and dietary choices as well as exposures that likely require policy interventions on a more universal scale, such as reducing air pollution. Substantial investments to map and characterize the human genome have led to an explosion of population-based studies that seek to understand the specific genetic variants associated with a wide variety of disease phenotypes. This in turn has generated great enthusiasm for applying these identified variants to personalized disease risk management and treatment. Whereas current discussion of the role of genetics in population-based health has already begun to move from discovery to translation with ongoing personalized medicine initiatives, our understanding of how to comprehensively measure the totality of environmental factors (broadly defined as non-genetic factors) that shape disease risk at a population level has generally lagged behind that of genetics.

Given the interplay and contributions of both “G” and “E” in disease processes, research and financial investments in one component but not the other likely lead to less efficiency in capturing the interindividual variation that exists in disease etiology, treatment, and survival. An increasing recognition of this point over the last decade has propagated several research initiatives aimed at a greater understanding of environmental factors in disease etiology, including efforts to understand the human “exposome.” Investment in these initiatives from a scientific funding standpoint has the potential to significantly improve exposure science and may in theory inform population-based health research strategies.

The concept of the human exposome was first conceived by epidemiologist Dr. Christopher Wild, a former director of the International Agency for Research on Cancer, in 2005. The concept has since gained traction within the research community. The idea behind the exposome is to complement the advances that have been made in understanding the human genome by characterizing the full spectrum of environmental exposures that occur from conception to death, with an understanding that these exposures are both dynamic in nature and broad in scope. Indeed, a full “mapping” of the exposome as originally conceived by Dr. Wild and subsequently by others would include an ability to measure all internal factors (e.g. endogenous hormones and metabolites) as well as exogenous exposures that are either specific to the individual (e.g. smoking/alcohol, diet) or more universal in nature (e.g. built environment, climate). These exposures would be captured or measured at various “snapshots” throughout life, ideally corresponding to key time points of biological development such as in utero, childhood, and early adulthood. In contrast to traditional exposure assessment in population-based studies, which relies on questionnaires or targeted biological measurements of a limited number of chemicals selected a priori, innovative technologies that take an agnostic and more comprehensive approach to measuring internal biomarkers (e.g. “omics”) or lifestyle-related factors (e.g. using smart phones to log physical activity patterns) would be needed for full characterization. Ideally, this would represent the cumulative level of all exposures experienced by the human body.

Implementation: Progress, Potential, and Challenges

Implementation of the exposome paradigm is still in its relative infancy, and current discussions are primarily focused on the scope of the initiative that is achievable within the parameters of scientific funding and infrastructure. For instance, in the absence of large prospective cohort studies that include collection of repeated samples or exposure information from people over multiple timepoints, application of the exposome paradigm is still possible but may be limited to fully characterizing the internal and external environment using samples or measurements taken at a single timepoint. While the current focus is on scientific implementation of this paradigm, the potential long-term translatable implications of exposome research can be imagined. From the standpoint of environmental regulation, agencies that conduct risk assessments of environmental exposures evaluate a series of questions, including the dose-response relationship of these exposures with biologic effects or disease risk, and whether certain factors like age at exposure influence susceptibility. Application of the exposome framework provides a mechanism to potentially better characterize these questions as well as to evaluate multiple exposures or “mixtures” when making regulatory decisions. This potential, however, would need to be balanced against the existing regulatory framework and the need to develop guidelines for interpreting the expansive and complex datasets.

While any application of the exposome paradigm to public health or clinical utilization would be an incredibly daunting challenge, a 2012 study published in Cell described this theoretical potential. The case study presented findings from a multiomic analysis of a single individual over 14 months, in which distinct biologic changes and omic profiles were observed during periods when the individual was healthy relative to periods when he developed viral illness and type 2 diabetes. The authors concluded that the observed profiles were a proof of principle that an integrative personal omics profile could potentially be used in the future for early diagnostics and monitoring of disease states. While the study did not integrate data on external environmental exposures, further incorporation of these factors into the omic framework may provide evidence of distinct signatures that differ according to exposure status.

Current efforts to advance the exposome field have been bolstered by several initiatives, including a 2012 report by the National Academies that described the future vision and strategy of exposure science in the 21st century. Exposome-related research is also a major goal of the 2018-2023 strategic plan of the National Institute of Environmental Health Sciences (NIEHS), and the agency has supported two exciting exposome research initiatives. These include the HERCULES (Health and Exposome Research Center: Understanding Lifetime Exposures) research center at Emory University, which is on the front lines of developing new technologies for evaluating the exposome, and the Children’s Health Exposure Analysis Resource (CHEAR), which encourages the use of biological assays in NIH-funded studies of children’s health.

As the field of exposomics matures, there will undoubtedly be several issues that arise that intersect both scientific and policy-related considerations as described by Dr. Wild and others involved in this field. These include but are not limited to:

  a) Cross-discipline education and training opportunities: The exposome paradigm encompasses multiple scientific disciplines, including laboratory sciences, bioinformatics, toxicology, and public health. Given the traditional model of graduate programs in science, which generally focus on distinct subfields, new educational and/or training programs that provide cross-disciplinary foundations will be critical to training the next generation of scientists in this field.
  b) Data accessibility and reproducibility: Given its expansive nature and the inherent interindividual variation of non-genetic factors, full characterization of the exposome and of associations between exposures and disease may require large international teams of researchers with central access to the expansive, complex datasets that are generated. Unlike the human genome, the dynamic nature of the internal and external environment will require extensive reproduction and validation both within and across different populations.
  c) Funding and defining value: Fully implementing the exposome paradigm from an epidemiological research perspective would likely require profound investments in study infrastructure and laboratory technology. The discontinuation of the National Children’s Study, which originally intended to enroll and follow 100,000 children from birth to 21 years of age in the United States, illustrates the challenges associated with conducting large longitudinal research projects. These demands would need to be balanced with justifying the added value and potential for future utility along the same lines as genomics. The comprehensive understanding of non-genetic determinants of disease risk from a research standpoint, however, is the natural precursor to any discussion of utility.
  d) Communication of research findings: The field of genomics has matured to the point that consumers can now obtain personalized reports and risk profiles of their genome from companies like 23andMe. It is theoretically possible that this commercial model could be extended in the future to other types of biomarkers, such as the metabolome, yet the dynamic nature and current lack of clarity regarding the disease relevance of most non-genetic biomarkers would create considerable challenges in interpreting and conveying the data.


The underlying principles of the exposome were originally conceived by Dr. Wild as a mechanism for identifying non-genetic risk factors for chronic diseases in epidemiologic studies. While the increasing number of exposome research initiatives remains primarily focused on this scientific goal, challenges remain in implementation. It is likely too early to project what the future public health and/or clinical utility of this paradigm, if any, may be. Nevertheless, continued investment in this area of research is critical to understanding the “missing pieces” of disease etiology and, ideally, to informing preventive measures and/or disease management in the future.


Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

November 21, 2018 at 9:55 pm

Science Policy Around the Web – November 16, 2018


By: Ben Wolfson, Ph.D.


Source: Pixabay


F.D.A. Seeks Restriction on Teens’ Access to Flavored E-Cigarettes and a Ban on Menthol Cigarettes

Stricter regulation of E-cigarettes by the Food and Drug Administration (FDA) has been rumored since last week, when market-leading E-cigarette company Juul Labs stopped accepting orders for some of its most popular flavored products. In an announcement on Thursday of this week, the FDA said that while it would allow stores to continue to sell flavored E-cigarettes, sales would have to be restricted to areas that are inaccessible to minors.

In the same announcement, the FDA stated that it will move to ban additional flavored tobacco products: menthol cigarettes and flavored cigars. Flavored tobacco products are disproportionately used by young people, with a recent study finding that over 80% of youth and young adult tobacco users report using flavored tobacco. The same study also reported that 75% of youth tobacco users said they would stop using tobacco if it were not flavored. These trends are exactly why the FDA has moved for new regulation: while youth use of combustible tobacco products has dropped, people who try E-cigarettes are more likely to use combustible products in the future.

Exactly how the new regulations will be applied remains to be determined, and public health advocates have expressed disappointment that the FDA did not announce an outright ban. Age restrictions are already in place for tobacco products, and many underage individuals obtain tobacco from older people rather than from stores illegally selling to them.

By contrast, the ban on menthol cigarettes and flavored cigars is being lauded by advocates, especially in the African-American community, where use of these products is especially high and where restrictions have been fought by the tobacco industry for years.

(Sheila Kaplan and Jan Hoffman, New York Times)

Offering free DNA sequencing, Nebula Genomics opens for business. But there’s an itsy-bitsy catch

As personal genome sequencing becomes more accessible and popular, so do the privacy concerns surrounding it. While individuals may choose to have their genome sequenced for recreational purposes, the data generated are highly valuable and of great interest to researchers, companies, and law enforcement, an evolving paradigm that was recently delved into in more detail on this blog. Individuals who sequence their genomes have the opportunity to share their (anonymized) data with researchers; however, these systems remain one-sided and simplistic.

While AncestryDNA and 23andMe are currently the most popular companies for personal sequencing, a new genetics company run by famed geneticist and genome engineering/synthetic biology pioneer George Church recently announced plans to enter the market. Church’s company, Nebula Genomics, plans to offer genome sequencing at a range of prices. Those who wish to opt out of sharing any information will pay $99 for genome sequencing, but the information provided will be low resolution. If the customer opts in to sharing data, the test will be free and the accuracy of the data will be increased.

Regardless of whether they choose to answer questions about themselves, both free and paying customers will still be able to refuse to share data with researchers. While other companies take an “all-or-nothing” approach to data sharing, Nebula will allow customers to audit data requests on a case-by-case basis, and any data shared will remain anonymized. Church stated that individuals with especially unique genetic traits that a company wants to study could even receive payment for their data. This approach would give people back control of their data, and is a push-back against the current system, in which companies control all data and the profits gathered from it.

(Sharon Begley, Stat News)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

November 16, 2018 at 4:33 pm

Insect Allies and the role of DARPA in scientific research


By: Ben Wolfson, Ph.D.


Source: Pixabay


Last month, a Pentagon research program called Insect Allies burst into the public conversation after a team of research scientists and legal scholars published an article detailing their concerns and critiques of the project in Science magazine. Insect Allies is run by the Defense Advanced Research Projects Agency (DARPA), and was announced in 2016 with the stated goal of “pursuing scalable, readily deployable, and generalizable countermeasures against potential natural and engineered threats to the food supply with the goals of preserving the U.S. crop system”. As its name indicates, the Insect Allies research program seeks to develop insects that carry gene-editing viruses, allowing for rapid genetic modification of plant food sources. The Insect Allies program exemplifies both the pros and cons of DARPA work. The described project leapfrogs current technological paradigms, promoting a next stage of synthetic biology work. At the same time, however, it seeks to create a technology with problematic potential military applications. This tension between basic research and the development of military technologies has dogged DARPA since its inception. As the theoretical and empirical knowledge in the fields of genetic modification and synthetic biology improves, it is imperative that novel technologies be developed with appropriate ethical and moral oversight and that scientists consider the ramifications of their work.

Origins and Changes of DARPA

Science and the military have long been interwoven, a process that was formalized in the U.S. in the past century. In 1947, President Truman created the Department of Defense, in part to fund scientific research. A decade later, President Eisenhower highlighted the importance of science in national defense with the creation of the Advanced Research Projects Agency (renamed DARPA in 1972). DARPA’s creation was a direct response to the Soviet Union’s launch of Sputnik, and the agency was given the mission of “preventing technological surprises like Sputnik, and developing innovative, high-risk research ideas that hold the potential for significant technological payoffs”.

In its early years, DARPA funded significant amounts of basic and foundational research that did not have immediate applications. However, in 1973 Congress passed the Mansfield Amendment, preventing the Defense Department from funding any research without “a direct and apparent relationship to a specific military function or operation”. The amendment was contentious at the time of its passing, with Presidential Science Advisor Lee DuBridge telling a congressional subcommittee that it had negatively affected the quality of research projects, because it is not possible to prove the relevance of a project in advance, and therefore wrong to prevent an agency from funding basic research it sees as valuable. Passage of the amendment fundamentally reshaped the U.S. research funding landscape: projects accounting for upwards of 60% of DOD research funds were cancelled or moved to other agencies. In place of basic research, DARPA shifted to funding research with direct military applications. These projects have often fallen into the realm of “dual-use” technologies, with both civilian and military uses. Successful examples of this strategy include funding for projects that evolved into the internet and the Global Positioning System (GPS). Current research spans from projects with clear civilian applications, such as a multitude of projects on next-generation medical technologies, to weapons research with purely military potential.

The Insect Allies program

Agriculture is one of the predominant industries in the U.S., making the country a net exporter and the world’s largest supplier of a variety of agricultural products, including beef, corn, wheat, poultry, and pork. The importance of American agriculture to both national security and the security of its global allies and trade partners is well recognized by national security officials, especially in the context of climate change and the potential for growing scarcity. The primary threats to agriculture are disease and weather-related events. While these can be mitigated through pesticides, clearing of crops, quarantine, and selective breeding, current strategies are both destructive and time consuming.

The Insect Allies program has three focus areas: viral manipulation, insect vector optimization, and selective gene therapy in mature plants. Through the application and combination of these technologies, Insect Allies would function by genetically modifying already-growing plants using “horizontal environmental genetic alteration agents” (HEGAAs). Traditionally, genetic modification involves changing the genes of a parent organism and propagating its offspring. This process is essentially the same as the selective breeding practiced in agriculture for generations. While effective, it is a time-consuming practice, as successive generations of the population of interest must be bred.

Through HEGAAs, Insect Allies would completely revamp this process. Instead of creating a population of interest from scratch, HEGAAs allow scientists to modify an existing population. To create a pesticide-resistant crop, the traditional strategy would be to insert the gene for pesticide resistance into one plant, collect its seeds, and use them to grow an entire field of pesticide-resistant plants. With HEGAA technology, farmers could make an already-grown field resistant by modifying each individual plant on a broad scale.

Criticism of the Insect Allies program

The authors of the Science article critique the Insect Allies program on a range of biological, ethical, and moral grounds. The article takes issue both with the use of wide-scale genetic modification technologies and with the use of insects as vectors instead of existing technologies such as overhead spraying. Wide-scale genetic modification is a line that has yet to be crossed, and it currently lacks a regulatory path. While research into gene-modifying technology is ongoing and real-world tests are inevitable, such tests are contentious and currently under debate. Moreover, agricultural products modified by HEGAAs have no current path to market. The program’s apparent lack of attention to the regulation its described application would require, combined with the fact that its stated goals are achievable with already existing technologies, leads the authors to suspect that Insect Allies is being developed for other ends. While a population of gene-modifying insects could be used to help U.S. crops survive weather changes or pesticides, it could also potentially be applied to the crops of other nations in wartime. Biological weapons were banned in 1972, and no nation has (publicly) developed them since. Although the technologies being developed by Insect Allies are described as “for peaceful means”, international competition with Insect Allies may accelerate crossing the line between peacetime and wartime technology.

Soon after publication of the Science article, Dr. Blake Bextine, program manager for Insect Allies, released a statement refuting many of these points. He stated that DARPA moved into agricultural work as it is an important aspect of both national and international security, and that the work falls under DARPA’s charter to develop fundamentally new technologies that leapfrog existing capabilities. Moreover, he affirmed that Insect Allies has no plan for open release, and that regulatory systems would be developed and had been planned since the start of the program.

What does the future hold?

The Science article’s authors note that they would be worried about Insect Allies whether it was under a civilian or military purview, but it is impossible to ignore the implications of synthetic biology and genetic modification research for the military. DARPA’s strategy of generating high-risk, high-reward research is both effective and ingrained in the DNA of the organization; so, however, is the fact that DARPA is a defense organization.

When DARPA was founded (as ARPA), its purpose was to promote high-risk scientific research that would increase U.S. soft power internationally. After the Mansfield Amendment, these goals shifted from basic toward applied research, and with them came a focus on defense-oriented research. An advantage of basic research is that it takes time to develop, allowing the findings, and their ramifications, to percolate throughout the global scientific community. The quintessential example is the regulation of recombinant DNA technologies: soon after the technology was developed, the 1975 Asilomar Conference was held to establish voluntary guidelines that would ensure the safety of a game-changing scientific technology. As synthetic biology development has accelerated, the discussion around regulating synthetic biology and genetic modification technology has likewise begun, and is currently ongoing.

While it is impossible to argue with the massive benefits that civilian applications of DARPA-developed technologies have provided, synthetic biology and genetic modification technologies have the potential to enact immense changes globally. The environment in which a technology is developed and applied can influence its use, and the way it is viewed by the public, for generations. The Insect Allies program states that it is focusing on developing insect-based HEGAA technologies as a means of pushing the development of gene-editing technologies to increase food security, in a transparent manner that promotes openly published research. It is critical that the Insect Allies program be held to this standard, and that regulation by the global scientific community be allowed to shape the direction and application of these potentially game-changing technologies.



Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

November 15, 2018 at 11:22 am

Unlinking databases is not enough to unlink identity from genetic profiles


By: Allison Dennis B.S.


Source: Pixabay


The efficacy of law enforcement is an issue of public safety. Advances in medicine are a matter of personal wellbeing. Knowing more about one’s unique genetic heritage is a point of curiosity. As all of these spheres delve further and further into DNA sequencing, the ubiquity of personal genetic information is increasingly becoming an issue of privacy. The emerging nature of DNA technology has left us with three major types of DNA databases, separated by their use: medical, forensic, and recreational. Each is governed by its own set of rules, set by federal law, state law, and user agreements. Under specific circumstances, data can be intentionally shared for other uses. However, the technological limitations that kept these databases separated in the past may be eroding.


By aggregating and comparing the genomes of people with and without a specific disease through DNA databanks, researchers can discover small glitches in the DNA of affected patients. Identifying the genetic changes that disrupt the normal functions of the body allows researchers to begin designing therapeutics to correct deficiencies or developing genetic tests to diagnose specific diseases, possibly before symptoms have appeared. The potential of medical databases has prompted government-led initiatives such as All of Us to amass genetic information from a diverse group of 1 million Americans, which will be queried for medical insights. Already, the Cancer Genome Atlas, maintained by the US National Institutes of Health, contains deidentified genetic data from tumor and normal tissues from 11,000 patients and is openly searchable for research purposes. Foundation Medicine, a private company that provides doctors and patients with genomic profiles of tumor samples to inform treatment options, has stockpiled data from over 200,000 samples. Foundation Medicine shares these data through collaborative agreements and business partnerships with members of the oncology research community and pharmaceutical companies.

Medical DNA databanks, while masking a patient’s name, may link to an individual’s medical history. Because researchers often do not know what parts of the genome will reveal key clues, the genetic data contained in these databases is rich. Often researchers look at how the frequency of single nucleotide changes at hundreds of thousands of places in the genome differ between people affected and unaffected by a particular disease.
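As a toy sketch of the case-control comparison described above, the allele counts at a single genomic position can be compared between affected and unaffected groups with a 2x2 chi-square test. All counts below are hypothetical and for illustration only; a real study repeats this across hundreds of thousands of positions and corrects the significance threshold for multiple testing.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 count table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Expected count in each cell: (row total) * (column total) / n
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Minor- vs. major-allele counts at one position (made-up numbers):
cases_minor, cases_major = 320, 680        # 1000 alleles from patients
controls_minor, controls_major = 250, 750  # 1000 alleles from controls

stat = chi_square_2x2(cases_minor, cases_major, controls_minor, controls_major)
# For 1 degree of freedom, a statistic above ~3.84 corresponds to p < 0.05
# for a single test; genome-wide studies demand far stricter thresholds.
print(f"chi-square = {stat:.2f}")
```

The minor allele here is noticeably more frequent in cases (32%) than controls (25%), which is the kind of frequency difference such scans are designed to flag.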

The medical benefit of compiling and sharing genomic information is carefully balanced against privacy concerns by federal regulation. The Genetic Information Nondiscrimination Act of 2008 (GINA) prohibits employers and health insurers from requesting access to an individual’s or family’s genetic information. The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule mandates that health-care providers not disclose an individual’s genetic information. The NIH Genomic Data Sharing Policy limits access to individual-level genetic information held in NIH databases, including the Cancer Genome Atlas, to approved scientific researchers. Despite these safeguards, genetic information contained within medical databases can be identified and provided to law enforcement following a court order in extreme cases.


Forensic DNA databases contain searchable genomic profiles for the critical task of identification by law enforcement and military experts. U.S. federal law allows law enforcement officers to collect and store DNA profiles from anyone they arrest, including those detained by immigration enforcement. Since 1998, the Federal Bureau of Investigation has hosted the national Combined DNA Index System (CODIS), which currently contains 16.8 million offender and arrestee profiles. Unlike medical databases, which can contain a wealth of information, CODIS profiles are limited to a set of 20 places in the genome where the number of times a short sequence of DNA is repeated varies between individuals. The unique combination of these 20 repeat lengths places the probability of two unrelated people sharing a profile at roughly 1 in 1 septillion, and the loci were intentionally selected not to reveal any medically relevant parts of the genome.
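The septillion-scale figure comes from simple multiplication: assuming the loci are inherited independently, the odds of a full-profile match are the product of the per-locus match probabilities, so 20 modest per-locus numbers compound into astronomically small odds. A minimal sketch (the 0.06 per-locus figure is a hypothetical average chosen for illustration; real values vary by locus and population):

```python
# Hypothetical average probability that two unrelated people share the
# same repeat count at a single STR locus (illustrative value only).
per_locus_match_prob = 0.06
num_loci = 20  # the CODIS core loci

# Assuming independence between loci, a full-profile match requires a
# coincidence at every locus, so the probabilities multiply.
profile_match_prob = per_locus_match_prob ** num_loci
one_in = 1 / profile_match_prob

print(f"full-profile match probability: {profile_match_prob:.2e}")
print(f"roughly 1 in {one_in:.1e}")  # on the order of 10^24, i.e. septillions
```

Even with a per-locus match chance of several percent, the product across 20 loci lands in the "1 in septillions" range the text cites.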

The creation of CODIS was authorized by Congress through the DNA Identification Act of 1994, which mandated privacy protection standards. As a safeguard, the database profiles are associated with specimen identification numbers rather than any personal information. The system can only be accessed in physically secure spaces and is restricted to use by criminal justice agencies specifically for the purpose of law enforcement. Only after a match has been found from a query and the candidate match has been confirmed by an independent laboratory will the identity of the suspected perpetrator be revealed, and even then only to the agencies involved in the cases. The Scientific Working Group on DNA Analysis Methods (SWGDAM) continues to recommend revisions to these standards for security and confidentiality issues. Despite housing a relatively unrevealing type of genetic information, CODIS goes above and beyond the privacy protections provided by many recreational and medical databases.


Individuals are increasingly turning to direct-to-consumer genetic testing, driven by their curiosity to discover their genetic heritage and to gain some insight into their genetic traits. These tests contain a wealth of information drawn from single nucleotide changes across more than 500,000 parts of the genome. The most popular tests are offered by AncestryDNA and 23andMe, who manage data according to Privacy Best Practices established by the industry. These practices include removing names and demographic identifiers from genomic records, storing identifying information separately if retained, using encryption, limiting access, and requiring consent for third-party sharing. As the records are presumed to contain medically relevant information, all identified samples are governed by the same HIPAA and GINA regulations that govern medical tests. 23andMe has amassed a database of over 5 million genetic profiles; AncestryDNA has over 10 million, rivaling the size of forensic and medical databases.

Direct-to-consumer genetics testing companies often sell de-identified genetic data to pharmaceutical and diagnostic development companies for research purposes. Those that follow the Privacy Best Practices established by the industry only do so for users who have consented to participate in research, and GINA expressly prohibits these companies from sharing an individual’s genetic information with potential employers or health insurers.

There are also limits to prevent law enforcement from abusing recreational genetic testing services. While someone could in principle submit a sample that is not their own, the AncestryDNA service agreement stipulates that users provide only their own sample, and 23andMe expressly disallows “law enforcement officials to submit samples on behalf of a prisoner or someone in state custody.” Moreover, the tests have been designed to make collecting a third party’s sample difficult. For instance, the 23andMe kit requires a volume of saliva that can take 30 minutes to produce, discouraging illicit collection.

While companies go to great lengths to protect the information contained in their databases, most will provide individuals with their own complete profiles upon request. The allure of mapping family connections has led millions of genealogical hobbyists to openly contribute their identifiable genomic data to searchable online databases. The most famous is GEDmatch, which currently contains about one million profiles. The platform allows users to upload their own genome and retrieve high-probability matches against other users’ profiles. A level of privacy is maintained by sharing only small pieces of the genome, allowing complete profiles to remain obscured. However, GEDmatch’s user agreement emphasizes that rather than use encryption, it stores data in a format that “would be very difficult for a human to read” and allows volunteers access to the data. Additionally, it specifically welcomes “DNA obtained and authorized by law enforcement” for inclusion in its database. The wealth of information publicly hosted on sites like GEDmatch has provided a unique opportunity for other types of DNA databanks to share information and blur the lines of privacy.
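The matching idea behind platforms like GEDmatch can be illustrated with a toy example. Real services compare phased, per-chromosome data and measure shared segments in centimorgans; the function below is only a hedged sketch of the core intuition, that long runs of consecutive matching markers suggest a segment inherited from a common ancestor. The function name and threshold are illustrative assumptions.

```python
# Toy sketch of relative matching: long runs of SNPs where two profiles
# share at least one allele suggest an identical-by-descent segment.
# Real genealogy services work per-chromosome with phased data and
# genetic (centimorgan) distances; min_run here is an arbitrary stand-in.

def shared_segments(profile_a, profile_b, min_run=100):
    """Return (start, end) index runs of consecutive matching SNPs
    at least min_run long. Profiles are lists of genotype strings
    (e.g. 'AG', 'CC') ordered by genomic position."""
    segments = []
    start = None
    for i, (a, b) in enumerate(zip(profile_a, profile_b)):
        if set(a) & set(b):           # at least one allele in common
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                segments.append((start, i))
            start = None
    if start is not None and len(profile_a) - start >= min_run:
        segments.append((start, len(profile_a)))
    return segments
```

Closer relatives share more and longer segments, which is why even a third cousin in the database can point investigators toward a family.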

Database Cross-Linking

The use of GEDmatch by law enforcement marks an important sea change in genetic privacy. In the past, medical and recreational databases were only occasionally queried by law enforcement seeking specific profiles. In April 2018, however, in a desperate search for leads to solve a cold case, law enforcement officers used a nearly 40-year-old rape kit to develop a genetic profile. While previous searches over the decades had been limited to the FBI database and the perpetrator’s 20 CODIS loci, officials were able to undertake a blind and expansive search by uploading the complete profile to the GEDmatch database, which ultimately led them to a third cousin of the man who would be charged with 12 murders.

These types of searches have the power to exonerate or implicate suspects, as a complete match is all but undeniable. Although such searches are only beginning to be used, for someone of European ancestry living in the United States the odds are as high as 60% that a genetic relative can be identified from a database similar to GEDmatch. A public opinion poll conducted shortly after April 2018 revealed that a majority of respondents approved of law enforcement searches of recreational databases, especially to identify perpetrators of violent crimes.

Scientists have already laid the theoretical groundwork that could allow law enforcement to link the limited 20 CODIS markers from a crime-scene sample to a suspect’s profile in a medical or recreational database. Portions of the genome in close physical proximity along a chromosome are more likely to be inherited together, a phenomenon known as linkage disequilibrium, which allows statistical predictions about which pieces are most likely to co-occur. Although the two types of profiles do not contain the same markers, scientists can predict which marker profiles most likely came from the same individual.
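A hedged sketch of that linkage idea: if a reference panel supplies conditional probabilities of SNP alleles given nearby STR alleles, each candidate SNP profile can be scored by its joint likelihood under the observed CODIS profile, and the highest-scoring candidate is the best guess for "same individual." This is an illustration of the statistical concept, not the published method; all names, structures, and numbers are invented for the example.

```python
# Illustrative scoring of cross-database linkage via linkage
# disequilibrium: for each (STR marker, nearby SNP) pair we assume a
# panel-derived table of P(SNP allele | STR allele), and sum log
# probabilities across pairs. Hypothetical data structures throughout.
import math

def linkage_score(str_profile, snp_profile, cond_prob):
    """Sum of log P(snp allele | str allele) over linked marker pairs.

    str_profile: {str_marker: allele}
    snp_profile: {snp_marker: allele}
    cond_prob:   {(str_marker, snp_marker): {(str_allele, snp_allele): p}}
    """
    score = 0.0
    for (str_m, snp_m), table in cond_prob.items():
        pair = (str_profile.get(str_m), snp_profile.get(snp_m))
        score += math.log(table.get(pair, 1e-6))  # floor for unseen pairs
    return score
```

Ranking every profile in a database by this score is what would let a 20-marker forensic profile single out a SNP-based record that shares none of those markers.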

While the use of these tactics might be supported for the purpose of identifying violent criminals, it also puts medical privacy at risk. Despite the de-identification of genomic profiles, scientists have demonstrated reasonable success in tracking down a person’s identity given a genetic profile, a genealogical database such as GEDmatch, and information on the internet.

As DNA databases develop in their depth of information and coverage of individuals, the ability to link records to individuals grows. A lack of compatibility will not be enough to keep medical genomic information sequestered from criminal profiles. Industry standards and user agreements must be discussed and strengthened to safeguard the genetically curious.


Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

November 8, 2018 at 10:14 am

Science Policy Around the Web – April 18, 2017


By: Nivedita Sengupta, PhD

Source: pixabay

DNA Testing

23andMe Given Green Light to Sell DNA Tests for 10 Diseases

On April 6th, the US Food and Drug Administration (FDA) approved the first at-home genetic test kits, which can be sold over the counter in pharmacies, for determining the risk of developing certain genetic diseases. Since 2006, 23andMe, a company based in California, has analyzed DNA from customers’ saliva samples to provide genetic insights into their risk of developing 240 different diseases and disorders. In 2013, however, the FDA, concerned that customers would use test results to make medical decisions on their own, ordered 23andMe to halt the service. In 2015, the FDA eased some of the restrictions, allowing the company to reveal to customers only information about genetic anomalies that could be passed on to their children, and nothing about their own disease risk.

23andMe now has permission to inform its customers about genetic mutations that are strongly associated with a small group of medical conditions such as Parkinson’s disease, late-onset Alzheimer’s disease, celiac disease and a hereditary blood-clot disorder called thrombophilia. However, it should be noted that the results from these tests are not equivalent to a medical diagnosis, as the development of a disease is also influenced by a person’s family history, lifestyle and environment.

The FDA’s decision paves the way for a wave of do-it-yourself diagnostic tests that will flood the market in the coming years. “It’s a watershed moment for us and the FDA,” says Kathy Hibbs, chief legal and regulatory officer at 23andMe. There are concerns, however, that lay consumers lack the medical knowledge to interpret the results and to understand the tests’ limitations, which could lead to misinterpretation and its complications. (Amy Maxmen, Nature News)

Neonatal Care

Giving Newborns Medicine is a Dangerous Guessing Game. Can We Make it Safer?

Medical emergencies in neonates are on the rise. It might surprise many parents to learn that 90% of the medications administered in a neonatal intensive care unit are not approved by the FDA for use in newborns. Neonates are routinely treated with drugs that have not been adequately tested for safety, dosing, or effectiveness. This is a global problem, and many factors contribute to it. First, parents and doctors are reluctant to enroll newborns in clinical trials. Second, pharmaceutical companies are afraid to test drugs on neonates because the risk of liability is very high. Neonates are also a small market, so companies may not recoup the cost of getting drugs approved for them.

In 2015, an FDA-funded nonprofit organization launched two global efforts to encourage clinical trials in newborns. One is the International Neonatal Consortium (INC), which published a guide to clinical trials in neonates last year. Dr. Jonathan Davis, director of the INC, said, “We’ve got to do something.” Without drug data for newborns, “we can’t be certain which drugs, in which doses, to use when.” Under the current system, doctors make decisions based on anecdote or intuition, which essentially means that every sick newborn is an uncontrolled, unapproved study with no guarantee of improvement. No data are collected, so nothing is learned that could inform the treatment of other infants around the world.

Physicians often make decisions by scaling down from how medications are used in adults. But this can be fatal, as past disasters have shown: the antibiotic chloramphenicol in the 1950s, and the preservatives benzyl alcohol and propylene glycol in the 1980s. Infants are not tiny adults; they absorb, metabolize, and excrete drugs differently. The majority of neonatal studies in recent years have failed to establish efficacy. More studies are needed, and this requires clinical trials designed to reduce risk and eliminate unnecessary interventions. (Megan Scudellari, STATNews)


Written by sciencepolicyforall

April 18, 2017 at 10:45 am

Science Policy Around the Web – February 19, 2016


By: Fabrício Kury, MD

Affordable Care Act

Obamacare supporters don’t like talking about it — but the individual mandate is working

Among the many goals of the enormous piece of legislation that is the Patient Protection and Affordable Care Act (PPACA), one is to deliver universal health care access in a nation that often ranks first in per-capita health care spending, without flirting with socialist-minded models such as single-payer health care that would have failed to pass in Congress. Because health care in the U.S. is so expensive, it is largely paid for through health insurance, which pools the risk of needing care and dilutes the cost among all insured individuals. In a free market, however, insurers suffer from the fundamental problem of adverse selection: only the people who need health care purchase insurance, while those who are mostly healthy opt out. This becomes a “death spiral” that leads to the financial insolvency of insurance companies, even when they go to extreme lengths to deny coverage to individuals expected to be costly. To address this, the PPACA prohibits insurers from denying coverage on the basis of pre-existing conditions and, at the same time, tackles adverse selection by making it mandatory that everyone (with few exceptions) have health insurance or face a financial penalty – the so-called “individual mandate”. Recent statistics on ACA enrollment show that the penalty is working to encourage young, otherwise-healthy people to sign up, exactly as intended. (Sarah Kliff, Vox)

Technology and Health Care Policy

When Software Tries to Eat Regulation

In the era of disruptive innovation, billion-dollar unicorns, and a there-is-an-app-for-that mindset, it is no surprise to hear that “every smart tech person I know is working in healthcare,” the $3 trillion industry that accounts for more than $1 out of every $6 spent in the entire U.S. economy. Underpinning digital revolutions such as Uber, Airbnb, Spotify, and even Wikipedia lies the concept of delivering value in a dramatically rethought manner that longstanding behemoth corporations fail to match. Health care, however, cannot be provided by a team of youngsters in a garage, because what is at stake is more serious than whether you can find a cab when you need one. Health care is delivered amid walls of regulations that protect patients and assign liability, and health care consumers are not necessarily eager to trade security for imaginative, cheaper alternatives. Since 2013, the Food and Drug Administration (FDA) has laid out regulations for responsible innovation in mobile health, following up with final guidance this year, while the HHS Office for Civil Rights offers guidance on adhering to HIPAA for health app developers. In this article, the examples of Zenefits, Theranos, and 23andMe demonstrate that the FDA has consistently made clear that the “Ubers” of health care must operate within the same legal framework that safeguards existing U.S. health care delivery models. (Erin Griffith, Fortune)

Fee-for-Service Healthcare

The Hidden Financial Incentives Behind Your Shorter Hospital Stay

In basically any U.S. market, if you purchase a product and it breaks too soon, you get a new one or your money back. In U.S. health care, though, until 2012, if a patient was discharged from a hospital but soon had to be re-admitted because of a preventable problem such as a poorly disinfected surgical wound, the hospital profited again from the new admission. Medicare’s Hospital Readmissions Reduction Program, part of the Patient Protection and Affordable Care Act (PPACA) and in effect since 2012, financially penalizes facilities that exceed historical measures of an acceptable re-admission rate, but it has had the adverse effect of encouraging “workflow gymnastics” to keep patients from being re-admitted, or at least from being counted as such. Another approach, the Bundled Payments for Care Improvement initiative launched in 2013, extends the concept of a single payment per diagnosis to cover all care the patient needs, including out-of-hospital care. While these approaches seem to have been successful, they are still built on the fee-for-service rationale, in which health care is paid for by the number (volume) of treatments provided. The American Hospital Association (AHA) affirms that there is considerable agreement that fee-for-service is one of the major culprits in the decades-old, unrelenting upward trend in the share of U.S. gross domestic product spent on health care. The opposite of fee-for-service is capitation, in which providers are paid a fixed price to deliver all care to a group of individuals regardless of the volume provided. The ACA has made capitation a possible alternative for some types of Accountable Care Organizations; however, it is not mandatory, the programs are still temporary, and their details must evolve beyond the failed capitation models of the 1990s. (Austin Frakt, The New York Times)


Written by sciencepolicyforall

February 19, 2016 at 9:00 am

Science Policy Around the Web – November 10, 2015


By: Daniël P. Melters, PhD.

Infectious disease

Cattle trial cuts human sleeping sickness

In addition to HIV and malaria, sleeping sickness is another serious infectious disease causing major health problems in sub-Saharan Africa, with many thousands of infections each year. In total, over 65 million people are at risk of infection. The disease is caused by protozoan parasites of the genus Trypanosoma, with Trypanosoma brucei gambiense accounting for more than 98% of all reported cases. The parasite is transmitted by tsetse flies. The people most affected live in rural areas, in close contact with livestock, which are an important reservoir in the Trypanosoma life cycle. To make matters worse, diagnosis and treatment require specifically skilled staff, so only about 30% of infected individuals receive treatment following a diagnosis.

A collaboration between the University of Edinburgh (UK), Makerere University (Uganda), and the Ugandan government has tackled the problem by injecting 500,000 cows with a parasite-killing agent, in addition to regular spraying with insecticide to suppress tsetse fly numbers. The number of people diagnosed with sleeping sickness went down by 90%. Following this successful trial, the program will be expanded to cover the whole of Uganda, including treatment of 2.7 million cattle. (SciDev.Net)

Precision Medicine Initiative

Privacy Risks from Genomic Data-Sharing Beacons

One of the cornerstones of President Obama’s Precision Medicine Initiative is the broad, but responsible, sharing of medical data among many scientists. In its recent report, the NIH Precision Medicine Initiative Cohort Program (PMI-CP) workgroup advised creating a “hub-and-spoke” model with a Coordinating Center that provides safeguards for data access, data normalization, and participant engagement. Part of this dataset is genomic data from patients. One major concern about genomic and genetic data is that they can be used to identify the donor, even when anonymized early on. A “beacon” is a web server that answers allele-presence queries in a binary (yes/no) manner. A recent article by Shringarpure and Bustamante in the American Journal of Human Genetics provides evidence that it is possible not only to determine whether an anonymous individual’s genome is present in a beacon, but also to identify their relatives, using just 1,000 single-nucleotide polymorphisms (SNPs). This poses a serious privacy concern for potential participants in the PMI-CP, and the concern is not limited to the PMI-CP. Recently, the American Association for Cancer Research (AACR) rolled out Project GENIE, in which US and European research institutes will share cancer genomes to catalyze the development of more precise cancer treatments. Nevertheless, Shringarpure and Bustamante make several suggestions for continuing to safeguard patient privacy. (American Journal of Human Genetics)
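A beacon is, at heart, a set-membership service, which is what makes the re-identification attack possible: enough binary answers about a target’s known SNPs reveal whether that individual is in the dataset. The minimal sketch below illustrates the query model only; the class and its internals are assumptions for the example, not the real Beacon protocol implementation.

```python
# Minimal sketch of a genomic "beacon": it answers yes/no to
# "does any genome in this dataset carry allele X at position Y?"
# Repeated queries against a target's known SNPs leak membership,
# which is the vulnerability Shringarpure and Bustamante exploit.
# Class design is illustrative only.

class Beacon:
    def __init__(self, genomes):
        # genomes: list of dicts mapping (chrom, pos) -> allele
        self.alleles = set()
        for genome in genomes:
            for site, allele in genome.items():
                self.alleles.add((site, allele))

    def query(self, chrom, pos, allele):
        """Binary allele-presence answer; no counts, no identities."""
        return ((chrom, pos), allele) in self.alleles
```

Note that the beacon never returns who carries an allele, only whether anyone does; the published attack shows that even this coarse signal, aggregated over roughly 1,000 SNPs, can suffice to detect an individual’s presence.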

Direct-to-Consumer Genetics

Another Genetic Testing Company in Hot Water with the FDA

In November 2013, the US Food and Drug Administration (FDA) warned the direct-to-consumer health testing company 23andMe that it needed to comply with federal regulations on approval for medical devices (section 201(h) of the Federal Food, Drug, and Cosmetic Act). 23andMe offered a saliva-based genetic test that provided participants with an ancestry-based analysis of some of their genetic markers, in addition to various health-related genetic variants (SNPs). The FDA’s position was that the latter required its approval as a medical device. Seven months after the warning, the FDA received an application from 23andMe, and the company recently obtained the federal seal of approval for a few of its health-related genetic tests.

23andMe is perhaps the best known of the direct-to-consumer genetic testing companies, but it is certainly not the only one. On November 2nd, the Louisiana-based company DNA4Life received a similar notification from the FDA. Just like 23andMe, DNA4Life has held the position that it does not need FDA approval to sell its genetic test kit. However, the FDA maintains that the test, which predicts how patients will respond to 120 of the most common medications, meets the definition of a “medical device” and requires that the company either provide evidence of FDA approval or explain why it does not need it. DNA4Life has not yet publicly responded to the FDA notification.


Written by sciencepolicyforall

November 10, 2015 at 12:00 pm