Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘genomics’

Science Policy Around the Web – March 06, 2017


By: Liu-Ya Tang, PhD

Source: pixabay

Technology and Health

Is That Smartphone Making Your Teenager’s Shyness Worse?

The development of new technologies, especially computers and smartphones, has greatly changed people’s lifestyles. People can telework without going to offices, and shop online without wandering in stores. While this has brought about convenience, it has also generated many adverse effects. People tend to spend more time with their devices than with their peers. Parents of shy teenagers ask, “Is that smartphone making my teenager’s shyness worse?”

Professor Joe Moran, in his article in the Washington Post, says that the parents’ concern is reasonable. The Stanford Shyness Survey, which was started by Professor Philip Zimbardo in the 1970s, found that “the number of people who said they were shy had risen from 40 percent to 60 percent” in about 20 years. Zimbardo attributed this rise to new technology like email, cell phones and even ATMs, and even described such phenomena of non-communication as the arrival of “a new ice age”.

Contrary to Professor Zimbardo’s claims, other findings show that the new technology simply provides a different way of socializing. For example, teenagers often use texting to express their love without running into awkward situations; texting actually gives them time and space to digest and ponder a response. Further, Professor Moran notes that Professor Zimbardo’s claim was made before the rise of social networks; shy teenagers can now share their personal lives online even if they don’t talk in public. He also discusses the paradox of shyness: shyness is caused by “our strange capacity for self-attention”, while “we are also social animals that crave the support and approval of the tribe.” Therefore, new technologies are not making shyness worse; on the contrary, social networks and smartphones can help shy teenagers find new ways to express that contradiction. (Joe Moran, Washington Post)

Genomics

Biologists Propose to Sequence the DNA of All Life on Earth

You may think that it is impossible to sequence the DNA of all life on Earth, but at a meeting organized by the Smithsonian Initiative on Biodiversity Genomics and the Shenzhen, China-based sequencing powerhouse BGI, researchers announced their intent to start the Earth BioGenome Project (EBP). The news was reported in Science. There are other ongoing big sequencing projects such as the UK Biobank, which aims to sequence the genomes of 500,000 individuals.

The EBP would greatly help researchers “understand how life evolves”, says Oliver Ryder, a conservation biologist at the San Diego Zoo Institute for Conservation Research in California. Though the EBP researchers are still working out many details, they propose to carry out the project in three steps. First, they plan to sequence the genome of a member of each eukaryotic family (about 9000 in all) in great detail to serve as reference genomes. Second, they would sequence species from each of the 150,000 to 200,000 genera to a lesser degree. Finally, the sequencing would be expanded to the remaining 1.5 million known eukaryotic species at a lower resolution, which could be improved if needed. The EBP researchers suggest that the eukaryotic work could be completed within a decade.
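For a rough sense of the scale involved, here is a small back-of-the-envelope sketch in Python that simply tabulates the counts quoted above. The phase labels, the choice of the 200,000 upper bound for genera, and the script itself are illustrative assumptions, not part of the EBP proposal.

    # Back-of-the-envelope tabulation of the three proposed EBP phases.
    # Counts come from the figures quoted above; everything else is illustrative.
    phases = [
        ("Phase 1: reference genomes, one per eukaryotic family", 9_000),
        ("Phase 2: draft genomes, one species per genus (upper bound)", 200_000),
        ("Phase 3: low-resolution genomes, remaining known eukaryotes", 1_500_000),
    ]

    cumulative = 0
    for label, n_genomes in phases:
        cumulative += n_genomes
        print(f"{label}: ~{n_genomes:,} genomes (cumulative ~{cumulative:,})")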

There are many challenges to starting this project. One significant challenge is sampling, which requires international efforts from developing countries, particularly those with high biodiversity. The Global Genome Biodiversity Network could supply much of the DNA needed, as it is compiling lists and images of specimens at museums and other biorepositories around the world. As not all DNA samples in museum specimens are good enough for high-quality genomes, getting samples from the wild would be the biggest challenge and the highest cost. The EBP researchers also need to develop standards to ensure high-quality genome sequences and to record associated information for each species sequenced. (Elizabeth Pennisi, ScienceInsider)

Have an interesting science policy link?  Share it in the comments!


Written by sciencepolicyforall

March 6, 2017 at 8:41 am

The Debut of Health Care Data Science


By: Fabrício Kury, M.D.

Image source: MedCityNews.com

It is easy for a millennial – a person born between the mid-1980s and the late 1990s – to be unaware of just how young the current methods used in health care research really are. Randomized controlled clinical trials (RCTs), dating only from the late 1940s, are probably younger than most millennials’ grandparents. Case-control methodology and Kaplan-Meier curves originated only in the 1950s, while meta-analyses were only accepted by medical researchers in the late 1970s. Step into the 1980s, and early millennials are as old as, if not older than, propensity scores and the concept that is today called cost-effectiveness research. The term “Evidence-Based Medicine” is as young as a millennial born in the early 1990s, while the late 1990s and 2000s saw the explosion of genomics, proteomics, metabolomics, and other -omics research. Finally, the 2010s might be credited as the decade when the term Data Science (“the fourth paradigm of science”) gained widespread notoriety and established its modern meaning as – long story made short – the practice of producing knowledge out of data that was created for other purposes.

While the second half of the 20th century transformed health care research into an ever more rigorous and technology-driven science, it also saw the cost of the U.S. health care sector grow unrelentingly, from a comfortable 5% of the Gross Domestic Product in 1960 to a crushing 18% in 2015. Medical bills have become the leading cause of personal bankruptcy in the nation, while life expectancy and other basic health indicators depict a country nowhere close to getting the same bang for its buck as other developed nations. In 2009, the Obama administration prescribed to the health care sector a remedy that had brought efficiency and cost savings to every industry it had previously touched: information technology. The Health Information Technology for Economic and Clinical Health (HITECH) Act (part of the American Recovery and Reinvestment Act of 2009) gave away as much as $36.5 billion of taxpayers’ money to hospitals and physician practices for them to buy and “meaningfully use” electronic health records (EHRs). This outpouring of money was overseen by the Office of the National Coordinator for Health Information Technology (ONC), which had existed since 2004 under a presidential Executive Order but was given a legislative mandate by HITECH. The act swiftly transitioned the country from mostly paper-based health care in 2008 to near-universal EHR adoption by 2015, giving electronic life, and potential reuse for research, to streams of health data previously dormant in paper troves.

Moreover, in March 2010, the Patient Protection and Affordable Care Act (PPACA, a.k.a. “Obamacare”) was signed into law and, among many other interventions, secured a few hundred million dollars for the creation of the Patient-Centered Outcomes Research Institute (PCORI). The mission of PCORI is to do research that responds directly to the real-life concerns of patients. To that end, one of PCORI’s first initiatives was the creation of PCORnet, a network of institutions capable of providing electronic health data for research. Most recently, in January 2015, President Obama announced the Precision Medicine Initiative (PMI). The PMI seeks to assemble a nationwide, representative cohort of 1 million individuals, from whom a wealth of health data will be collected with no definitive goal other than to serve as a multi-purpose, prime-quality dataset for observational electronic research. Meanwhile, private sector-led initiatives such as Informatics for Integrating Biology and the Bedside (i2b2) and Observational Health Data Sciences and Informatics (OHDSI) were also launched with the mission of accessing and doing research on health care’s big data, and their publications can easily be found in PubMed.

These initiatives depict a political and societal hope – or hype? – that information technology, among its other roles in health care as a whole, can make health care research faster, broader, more transparent, more reproducible, and perhaps also closer to the everyday lives of people. One premise is that by using existing EHRs for research, instead of data collected on demand for a particular study, the researcher gets closer to the “real world” individuals who ultimately receive the treatments and conclusions produced by the study. In traditional clinical trials and other studies, the patients who participate are highly selected and oftentimes remarkably unrepresentative of the general population. Moreover, EHR-based research has the potential to investigate more individuals than any previous method could possibly attempt. This broader reach makes rare conditions (or combinations of conditions) not so rare that they cannot be readily studied, and allows subtler variations in diseases to become detectable. On top of that, these studies can be done at the speed of thought. Indeed, electronic health record-based clinical research recently published in the Proceedings of the National Academy of Sciences (PNAS) has proved feasible at an international, multi-hundred-million-patient scale within a breathtakingly short time span. Altogether, one can sense in this picture that the billions of dollars spent on HITECH, PCORnet, the PMI, and the NIH’s Data Science research grants might not have been just unfounded hype.
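To make the “rare conditions are not so rare” point concrete, here is a small illustrative calculation in Python. The prevalence and record counts are hypothetical, chosen only to show the order-of-magnitude effect of working with EHR-scale data instead of trial-sized cohorts.

    # Hypothetical example: expected number of patients with a rare condition
    # (assumed prevalence of 1 in 50,000) at different dataset sizes.
    prevalence = 1 / 50_000

    for n_records in (1_000, 100_000, 10_000_000, 100_000_000):
        expected_cases = n_records * prevalence
        print(f"{n_records:>11,} records -> ~{expected_cases:,.2f} expected cases")

At trial scale such a condition is essentially invisible; at a hundred million records it yields on the order of two thousand potential study subjects.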

The relationship of IT and health care must, however, recognize its rather long history of frustrated expectations. In 1968, for example, Dr. Lawrence Weed – the father of today’s prevailing paradigm of patient notes – predicted that in the future all text narratives in electronic health records would be entered in a structured form that enables scientific analysis. Today, to say the least, we have become less confident about whether such a change is feasible or even desirable to begin with. In 1987, Barnett and colleagues believed that “relatively simple computational models” could be used to construct “an effective [diagnostic] assistant to the physician in daily practice” and distributed nationwide, but such an assistant has yet to arrive at your physician’s office downtown (although, truth be told, it might be around the corner). While presently teeming with excitement and blessed with incentives, the journey of IT into health care and health care research is invariably one of uncertainties and risks. Health information technology has been accused of provoking life-threatening medical errors, as well as – like previous technological breakthroughs in the history of medicine, including the stethoscope – harming the patient-physician relationship and the quality of care. Earlier this year, the editors of the New England Journal of Medicine went as far as to state that data scientists are regarded by some clinical researchers as “research parasites.”

Moreover, the Federal Bureau of Intelligence has investigated that medical information can be sold on the black market for 10 times more than a credit card number, while at the same time cybersecurity experts are stunned by the extreme vulnerability of current U.S. health care facilities. This provides sensible ground for concern about patient privacy violation and identity theft once the health records have moved from papers into computers. Unlike a credit card, your medical and identity information cannot be cancelled over the phone and replaced by a new one. Patient matching, i.e. techniques for recognizing that data produced at separate sites refer to the same person, oftentimes confronts blunt opposition by civil opinion, while the ultimate ideal of a National Patient Identifier in the U.S. is explicitly prohibited by present legislation (HIPAA). Such seamless flow of interoperable health data between providers, however, is the very first recommendation expressed in 2012 by the Institute of Medicine for realizing the Learning Health Care System – one that revolves around the patient and where scientific discovery is a natural outgrowth of patient care.
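As an illustration of what patient matching involves at its very simplest, the sketch below links records from two hypothetical sites by comparing a salted hash of normalized name and date of birth. This is only a toy deterministic example of the general idea, not any standard or production algorithm; real systems rely on probabilistic matching and far more careful normalization.

    import hashlib

    SALT = "shared-secret"  # hypothetical value agreed upon by both sites

    def match_key(first_name, last_name, dob):
        # Normalize, then hash, so the sites never exchange raw identifiers.
        normalized = f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}"
        return hashlib.sha256((SALT + normalized).encode()).hexdigest()

    site_a = {match_key("Ann", "Smith", "1980-07-04"): "A-001"}   # fabricated record
    site_b = {match_key("ann", " SMITH", "1980-07-04"): "B-417"}  # same person, other site

    matched = set(site_a) & set(site_b)
    print(f"{len(matched)} record(s) matched across sites")  # prints: 1 record(s) matched across sites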

With or without attaining the ideal of a Learning Health Care System, the U.S. health care system will undergo transformation sooner or later, by intervention or by itself, because the percentage of GDP spent on health care can only keep increasing for so long. Information technology is at minimum a sensible “bet” for improving efficiency – however, the power of IT for improving efficiency lies not in greasing the wheels of existing paradigms, but in outclassing them with novel ones. This might be part of the explanation for the resistance against IT, although there does exist some evidence that IT can sometimes do more harm than good in health care, and here “harm” can sometimes mean patient harm. The cold truth is that, in spite of decades of scientific interest in using computers for health care, only very recently did the health care industry become computerized, so we remain not far from the infancy of health care informatics. Nevertheless, Clinical Informatics was unanimously approved in 2011 as a board-certified physician subspecialty by the American Board of Medical Specialties, signaling that the medical community sees in IT a permanent and complex duty for health care. Similarly, in late 2013 the NIH appointed its first Associate Director for Data Science, signaling that this novel field also holds importance for health care research. Finally, there might be little that can be done with the entire -omics enterprise, with its thousands upon thousands of measurements multiplied by millions of patients, that does not require data-scientific techniques.

The first cars were slower than horses, and today’s high-speed, road-only automobiles only became feasible after the country was dependably covered with a network of roads and freeways. That network was built not by the automobile producers, but by the government, upon recognition that it would constitute a public good. The same principle could very well apply to health care IT’s important issues with privacy, security and interoperability, with the added complication that it is easy for an EHR producer to design a solution but then block its users from having their system interact with software from competing companies. Now that health care records are electronic, we need the government to step in once again and build or coordinate the dependable freeways of health care data and IT standards, which will also constitute a public good and unlock the fundamental potential of the technology. Health care, on top of its humanitarian dimension, is fundamentally intensive in data and information, so it is reasonable to conjecture that information technology can be important, even revolutionary, for health care. It took one hundred years for Einstein’s gravitational waves to evolve from a conjecture based on theoretical foundations to a fact demonstrated by experiment. Perhaps in the future – let us hope not a century from today! – some of the data-scientific methods such as artificial neural networks, support vector machines, naïve Bayes classifiers, and decision trees, among others, will, in the hands of the millennials, withstand the test of time and earn an entry in the standard jargon of medical research, just as meta-analyses, case-control studies, Kaplan-Meier curves, propensity scores, and the big grandpa of them all, the randomized controlled trial, were accepted in their own generations.
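As a purely illustrative footnote to the methods named above, the sketch below fits one of them, a decision tree, to a tiny fabricated “EHR-like” dataset using scikit-learn. The features, labels, and parameter choices are invented for demonstration; nothing here comes from the essay’s sources or from real patient data.

    from sklearn.tree import DecisionTreeClassifier

    # Fabricated rows: [age, systolic blood pressure, diabetic (0/1)]
    X = [[45, 130, 0], [62, 160, 1], [38, 118, 0], [70, 155, 1],
         [55, 142, 1], [29, 110, 0], [66, 150, 1], [50, 135, 0]]
    y = [0, 1, 0, 1, 1, 0, 1, 0]  # fabricated outcome, e.g. readmission within 30 days

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(model.predict([[60, 150, 1]]))  # class prediction for a new, hypothetical patient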

Written by sciencepolicyforall

July 13, 2016 at 11:15 am

Science Policy Around the Web – April 17, 2015


By: Cheryl Jacobs Smith, Ph.D.

photo credit: MJ/TR (´・ω・) via photo pin cc

Genomics in Medicine

Personalizing Cancer Treatment With Genetic Tests Can Be Tricky

Since the New Year, President Obama, backed by National Institutes of Health Director Dr. Francis Collins, has rejuvenated an initiative to use the human genome to make more informed medical decisions in health care. Since the initial sequence of the human genome was published in 2001, scientists and physicians have used this information to better understand the underlying complexities of human behavior, health, and disease. As a consequence, many areas of medicine use human genetic information as a diagnostic tool to guide treatment regimens.

More and more oncologists, or cancer doctors, are relying on genetic tests of a patient’s tumor to help guide cancer treatment. However, given the complexity of our genome coupled with our limited understanding of the millions of A’s, T’s, C’s, and G’s encoding our genetic information, much of the information generated by genetic tests can be ambiguous. Researchers writing in Science Translational Medicine say there is a way to make these tests more meaningful.

One of the main issues with genetic testing of tumors is that tumors harbor many mutations, and it is unclear which mutation is key to killing the cancer cells, making therapeutic decisions difficult. In this regard, the researchers suggest conducting genetic tests not only on the patient’s cancer but also on the patient’s healthy, normal tissue. In this way, physicians and researchers can detect cancer-specific mutations, since these mutations would be present only in the cancer and not in the normal, healthy tissue.
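Here is a minimal sketch of the tumor-versus-normal comparison described above: variants found in the tumor but not in the patient’s healthy tissue are the candidate cancer-specific (somatic) mutations. The variant names below are only illustrative examples; real analyses start from aligned sequencing reads and variant-call files rather than hand-written sets.

    # Variants called in each sample (illustrative values only).
    tumor_variants = {"TP53 p.R175H", "KRAS p.G12D", "BRCA1 c.68_69delAG"}
    normal_variants = {"BRCA1 c.68_69delAG"}  # also in healthy tissue, so inherited (germline)

    somatic_candidates = tumor_variants - normal_variants  # present only in the tumor
    print(sorted(somatic_candidates))  # ['KRAS p.G12D', 'TP53 p.R175H']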

This is not to say that current genetic tests conducted on cancers are untrustworthy; indeed, they are quite reliable at identifying mutations that are clearly linked to certain cancers. The group asserts that in cases where this approach does not work, additional sequencing of the normal, healthy tissue as a point of comparison may improve diagnostic quality for tumors that produce ambiguous results. Cancer diagnostics is a booming, changing field, and much remains to be seen regarding which approaches become standard. (Richard Harris, NPR)

Federal Research Funding

Controversy awaits as House Republicans roll out long-awaited bill to revamp U.S. research policy

The America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act of 2007, or America COMPETES Act, was signed into law by President Bush on August 9, 2007. The COMPETES Act sets funding targets for select physical science agencies: the National Science Foundation (NSF), the National Institute of Standards and Technology (NIST), and two offices within the Department of Energy (DOE): the Office of Science and the Advanced Research Projects Agency-Energy (ARPA-E).

The reauthorization bill, authored by the House science panel’s chair, Representative Lamar Smith (R–TX), contains provisions that scientists are likely to find interesting:

  • NSF spending: The bill would authorize $126 million less than President Obama requested but $253 million more than NSF’s current budget. It reallocates NSF’s budget toward the natural sciences and engineering at the expense of the geosciences and the social and behavioral sciences. To add insult to injury, additional cuts from the geosciences and the social and behavioral sciences are expected.
  • DOE R&D: The bill funds most Office of Science programs in 2016 but keeps their budgets flat in 2017. Cuts would fall on the more applied renewable energy programs and new energy technologies. Interestingly, funding would increase for fossil and nuclear energy.
  • Peer review: Since Smith became chair in 2013, how NSF reviews the 50,000 or so funding requests it receives from scientists every year has been a major area of debate. Apparently Smith and NSF Director France Córdova have agreed upon legislative language that will not “[alter] the Foundation’s intellectual merit or broader impacts criteria for evaluating grant applications.”
  • NSF’s portfolio: This section of the bill gives NSF the responsibility “to evaluate scientific research programs undertaken by [other] agencies of the federal government.” The language apparently asks NSF to judge how other research agencies are running their research programs, which is quite an awkward and broad demand. It remains to be seen how this will play out.
  • Large new facilities: This section of the bill tries to rein in “wasteful spending” by requiring the NSF to correct any problems identified by an independent audit of a project’s expected cost before starting construction. However, the bill also restricts spending from contingency funds “[…] to those occurrences that are foreseeable with certainty … and supported by verifiable cost data.” This is curious language, given that the purpose of a contingency fund is to cover unexpected occurrences.
  • Administrative burden: This part of the bill supports reducing administrative burden in the form of government oversight and regulations, arguing that compliance is costly and that these monies could instead be used to fund research. To address the issue, the bill would have the White House science advisor convene an inter-agency panel.
  • NIST: The bill increases NIST’s budget; however, it falls short of President Obama’s request.

The good news is that a COMPETES reauthorization bill has finally been rolled out. However, controversy awaits over how effective the reauthorized act would be. (Jeffrey Mervis and David Malakoff, ScienceInsider)

Climate Policy

Climate change: Embed the social sciences in climate policy

The Intergovernmental Panel on Climate Change (IPCC) needs to broaden its perspective by adding more social scientists. The organization has been like a moth drawn to a flame, focusing its attention on the well-lit pool of the brightest climate science, while the insights that matter most are not readily visible and lie far from that bright light of the debate. So far the IPCC has involved only a narrow slice of the social sciences: economics. The other social sciences have been mostly absent. Bringing the broader social sciences into the IPCC may prove challenging, but it is achievable if the IPCC adopts a strategy that reflects how those fields are organized and which policy-relevant questions they know well. The IPCC has proved to be important, but at present it is too narrow and must not monopolize climate assessment. Reforming the organization would greatly benefit the conversation surrounding climate change and move contentious work into other forums. (David Victor, Nature)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

April 17, 2015 at 9:00 am

Science Policy Around the Web – December 7, 2012


photo credit: Christopher Chan via photopin cc


By: Jennifer Plank

Our weekly linkpost, bringing you interesting and informative links on science policy issues buzzing about the internet.

Supreme Court to Decide if Gene Patents are Legal
On November 30, the Supreme Court announced that it would hear the ACLU’s case against Myriad Genetics regarding the patenting of human genes. Myriad Genetics has patented the BRCA genes; therefore, it is the only company allowed to perform medical research and testing on them. Because no other company can test for BRCA1 or BRCA2 mutations, the testing remains very expensive for patients (approximately $5,000, of which insurance companies pay a percentage). The ACLU contends that naturally occurring genes should not be patentable. Myriad disagrees, stating that without patents it is not financially viable to conduct medical research. The Supreme Court will hear the case in early spring 2013. (Lynda Altman)

Genome Sequencing For Babies Brings Knowledge and Conflict – Whole genome sequencing can be used to decipher an individual’s genetic code and to screen for thousands of conditions that may affect the individual later in life. As the technology improves and becomes more common, whole genome testing will become more affordable for patients. Additionally, whole genome sequencing can be used to diagnose babies at birth. However, this use of the technology raises many questions about how the results should be handled. For example, many adults who have had their genomes sequenced have chosen not to receive results relating to the risk of incurable diseases such as Huntington’s or Alzheimer’s disease, whereas newborn babies are unable to voice whether they would want to know such results. This article outlines many of the ethical issues facing whole genome sequencing for babies. (Rob Stein)

Research Grants: Conform and be Funded – Between 2002 and 2011, the National Institutes of Health (NIH) funded over 460,000 research grants, and the labs supported by these grants have produced numerous medical advances. However, it is unclear whether the most influential researchers are funded by NIH grants. A survey of publications since 2001 suggests that approximately 60 percent of the most influential scientists (those who published papers that have been cited more than 1,000 times) do not have NIH funding. This finding suggests that the NIH is not meeting its goal of funding the “best science by the best scientists.” (Joshua Nicholson and John Ioannidis)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

December 7, 2012 at 11:55 am

Science Policy Around the Web – October 5, 2012


By: Rebecca Cerio

Our weekly linkpost, bringing you interesting and informative links on science policy issues buzzing about the internet.

President’s Bioethics Commission Releases Report on Genomics and Privacy – Whole genome sequencing (sequencing of a person’s entire genome) is swiftly becoming more and more affordable and opens up tremendous opportunity to advance medical knowledge and give people a new grip on their own health.  However, there have been lingering doubts about how such intimate knowledge will be protected, collected, and used.  New guidance about issues of privacy, regulation, and public good has been released by the Presidential Commission for the Study of Bioethical Issues.  You can get the whole report here.

Learn to Read a Scientific Report – This post on Wired.com is tiny and likely overlooked, but it made my day.  Quick, easy tips that hit upon some important ways for the public to evaluate scientific information (and advertisements) that come their way.  (by Noah Gray)

Doctors just say ‘no’ to drug company studies – Drug companies routinely fund, produce, publish, and advertise studies investigating the efficacy of their products.  One audience is the general public, but a larger audience is doctors.  Do doctors take into account possible drug company bias when evaluating new drugs?  Yes, they do, and they don’t like it, says a new study from investigators at the University of Arizona. (by Jennifer Fitzenberger via Futurity.org)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

October 11, 2012 at 4:59 pm

The Potential and Pitfalls of Direct to Consumer Genetic Tests


photo credit: Alfred Hermida via photo pin cc

By:  Danielle Daee

Approximately 0.1% of our genomic sequence differs from person to person. Some of these subtle genetic variations have important physiological consequences, which are reflected in our risk of developing diseases and in our overall health. In recent years, genome-wide association studies have sifted through individuals’ entire genomic sequences to try to identify genetic variants that are associated with increased disease risk. While these studies are highly informative, they often lack the functional studies required to close the gap between correlation and causation. After all, without knowing what a change in a particular gene does, there is no way to know whether that change is actually the cause of a particular disease.
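To illustrate the statistical core of a genome-wide association study, the sketch below tests a single variant by comparing allele counts between cases and controls with Fisher’s exact test (using SciPy). The counts are fabricated; a real GWAS repeats a test of this kind across millions of variants and must correct for multiple testing, population structure, and other confounders.

    from scipy.stats import fisher_exact

    # Fabricated 2x2 table of allele counts: [risk allele, other allele]
    cases = [620, 380]      # 500 cases contribute 1,000 alleles
    controls = [510, 490]   # 500 controls contribute 1,000 alleles

    odds_ratio, p_value = fisher_exact([cases, controls])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")

Even a variant with a clear statistical signal like this one says nothing, on its own, about biological mechanism, which is exactly the correlation-versus-causation gap noted above.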

Despite the caveats to interpreting correlative association studies, several biotech companies have developed direct to consumer (DTC) genomic tests to help consumers identify their personal risk for various diseases.  These tests present a variety of public health policy concerns.  Foremost is whether or not companies are overstating the usefulness and understating the caveats of genetic information to their consumers.  Furthermore, are consumers adequately equipped to interpret the results of genomic tests without a trained professional?

In May 2010, Pathway Genomics announced a plan to offer its genetic testing services at Walgreens pharmacies.  This plan marked the transition of DTC genetic testing sales from a less accessible internet commerce model to an over-the-counter sales model that would dramatically increase accessibility.  This increased reach sparked a firestorm of public concern and triggered an investigation by the Government Accountability Office, a Congressional committee hearing, and a Food and Drug Administration (FDA) panel discussion to determine how regulation of DTC tests should proceed.

Written by danidaee

June 5, 2012 at 11:53 am

Informed Consent in the Genomics Era


photo credit: Alfred Hermida via photo pin cc

By:  Katia Garcia-Crespo

Rapid progress in genomics research, fueled by new technologies that allow fast, low-cost whole genome sequencing, has brought the promise of personalized medicine closer than ever before. However, for genomic data to be useful, it must be linked to a donor’s clinical samples and medical history. Biobanks have become the instrument to achieve this.

Biobanks are defined as “repositories of human biological material and associated data stored for research purposes”; they are found on every continent and have grown substantially in recent years (1). As biobanks expand their archives of biological materials related to genetic studies, the ethical issues surrounding the use of these materials will also grow. Although initial consent is required for the storage of biological samples for research purposes, biobanks’ usefulness resides in their ability to provide materials to multiple researchers for studies not specified on the initial consent forms. Current regulations don’t provide clear guidance on obtaining informed consent for future research uses such as these.

Two recent court cases highlight the importance of obtaining appropriate consent for the use of stored biological samples. In the first case, five families sued the state of Texas over the use of dried blood-spot samples that had been collected for newborn genetic screening but were later used in research. The families claimed that no consent had been obtained for indefinite storage and undisclosed research use. The case was settled out of court, and about 5 million stored samples were destroyed (2). In response, the state of Texas passed legislation allowing the storage of such samples, provided that parents can opt out.

In the second case, Arizona State University agreed to pay $700,000 and return blood samples to members of the Havasupai Indian tribe. Tribe members believed that they had given consent for their samples to be used in diabetes research. They had signed broad consent forms, but initial communications with the tribe had only mentioned diabetes. The Havasupai DNA was later used in schizophrenia, inbreeding, and evolutionary studies to which the tribe objected (3).

In both of these cases the affected parties claimed that researchers had not been clear about what they intended to do with the samples collected. In other words, though consent was sought, it was not fully informed consent. It is clear that with greater transparency these cases could have been avoided, but what constitutes adequate informed consent is still a matter of intense debate.

Written by sciencepolicyforall

April 30, 2012 at 3:32 pm