Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘big data’

Science Policy Around the Web – April 7, 2017


By: Kseniya Golovnina, PhD

Cancer Research

RNA-Seq Technology for Oncotargets Discovery

One of the most significant recent discoveries in cancer research to combine a “Big Data” approach with experimental validation was made by Chinese and American scientists working together. Using advanced RNA sequencing (RNA-seq) of cancer transcriptomes, they described KANSARL, the first familially inherited cancer-predisposition fusion gene specific to populations of European ancestry.

A fusion gene is a hybrid formed from two previously separate genes as a result of chromosomal rearrangement. Fusion genes are often oncogenes. The first fusion-gene abnormality described in a human malignancy was the Philadelphia chromosome. In the early 1980s, scientists showed that a translocation between chromosomes 9 and 22 creates a fusion gene (BCR/ABL1) whose chimeric protein can induce chronic myeloid leukemia. KANSARL is the most prevalent cancer-predisposition fusion gene discovered so far. The researchers systematically analyzed RNA-seq data from many cancer types collected around the world, together with RNA-seq datasets from the 1000 Genomes Project. KANSARL fusion transcripts were rarely detected in tumor samples from patients of Asian or African origin, but occurred in 28.9% of patients of European origin.

Scientists from the Cancer Genome Anatomy Project at the National Cancer Institute (NCI), using sophisticated sequencing techniques, have identified 10,676 gene fusions among cancer-related chromosomal aberrations, and the Splicingcodes effort has identified over 1.1 million novel fusion transcripts, many of which are likely biomarkers of disease. Fusion genes play an important role in cancer diagnosis and in monitoring treatment progress: the disappearance of the fusion transcript signals the disappearance of the tumor tissue. Currently, several clinical trials are aimed at treating fusion-positive patients with a range of targeted therapies, which will hopefully lead to novel therapies and save patients’ lives. (Splicingcodes)
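The core idea behind RNA-seq fusion screens like these can be sketched in a few lines: a sequencing read whose two halves align to different genes marks a candidate fusion junction. This is a toy illustration only; the gene coordinates and reads are invented, and the gene names reflect the reported KANSL1-ARL17A composition of KANSARL, not the study's actual pipeline.

```python
# Toy split-read fusion detection: a read whose halves map to two
# different genes suggests a fusion junction. Coordinates are invented.

GENES = {               # gene -> (start, end) on a toy reference
    "KANSL1": (100, 200),
    "ARL17A": (500, 600),
}

def gene_at(pos):
    """Return the gene containing a reference position, or None."""
    for name, (start, end) in GENES.items():
        if start <= pos < end:
            return name
    return None

def fusion_candidates(read_alignments):
    """read_alignments: list of (read_id, pos_of_first_half, pos_of_second_half)."""
    hits = []
    for read_id, left_pos, right_pos in read_alignments:
        a, b = gene_at(left_pos), gene_at(right_pos)
        if a and b and a != b:
            hits.append((read_id, f"{a}-{b}"))
    return hits

reads = [("r1", 150, 550),   # spans KANSL1 -> ARL17A: fusion evidence
         ("r2", 120, 160)]   # entirely within KANSL1: normal read
print(fusion_candidates(reads))  # [('r1', 'KANSL1-ARL17A')]
```

Real fusion callers must additionally handle alignment ambiguity, sequencing errors, and read-through transcription, which is why large datasets and statistical filtering are essential.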


Turning Mammalian Cells into Biocomputers to Treat Human Disease

Engineering cells by manipulating DNA to control their behavior is a growing part of synthetic biology. Scientists have worked with bacterial cells for years to perform controlled actions, for example, lighting up when oxygen levels drop. Bacterial cells, including Escherichia coli, have a simple genome structure and are relatively easy to manipulate. In bacteria it has also been possible to join several genetic circuits within a single cell to carry out more complex actions.

After these successes in bacteria, researchers aimed to create genetic circuitry that can detect and treat human disease in mammalian cells. Most attempts failed due to the complexity of the mammalian genome, until a group of biomedical engineers from Boston and Basel, Switzerland decided to upgrade their DNA “switches”. They exploited the ability of special enzymes, DNA recombinases, to selectively cut and stitch DNA. The new mammalian-cell system is called ‘Boolean logic and arithmetic through DNA excision’ (BLADE). The BLADE team built a wide variety of circuits (113), each designed to carry out a different logical operation, with a 96.5% success rate. This Boolean system has great potential for applications in cell and tissue engineering. One exciting possibility is engineering T cells with genetic circuits that initiate a suicide response to kill tumors when they detect the presence of two or three “biomarkers” produced by cancer cells. (Robert F. Service, ScienceNews)
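The two-biomarker kill-switch idea is, at its core, a Boolean AND gate: trigger the response only when both cancer markers are sensed. A minimal software sketch of that logic (marker names are hypothetical placeholders, not the actual BLADE inputs):

```python
# Schematic of a two-input biomarker AND gate, analogous to the logic
# BLADE implements in DNA: the suicide program fires only when both
# tumor markers are present. Marker names are hypothetical.

def kill_switch(marker_a_present: bool, marker_b_present: bool) -> bool:
    """AND logic: trigger the suicide response only if both markers are sensed."""
    return marker_a_present and marker_b_present

# Truth table, as a BLADE circuit would be validated against:
for a in (False, True):
    for b in (False, True):
        print(a, b, kill_switch(a, b))
```

In BLADE the same truth table is realized physically: recombinases excise DNA segments so that the output gene is only transcribed in the input state corresponding to the desired row of the table.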

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

April 7, 2017 at 9:22 am

Science Policy Around the Web – September 9, 2016


By: Thaddeus Davenport, PhD



DNA Data Storage

In a recent Nature News article, Andy Extance described the growing need for novel data storage methods and materials. It is estimated that between 2013 and 2020 there will be a tenfold increase in digital information, requiring 44 trillion gigabytes of storage. This number is difficult to comprehend, but its magnitude and the rapid rate of digital data growth are put in context by a second, more shocking, estimate: if the expansion of digital information continues at the forecasted rates, the amount of data requiring storage in 2040 will require “10 to 100 times the expected supply of microchip-grade silicon.” For this reason, researchers have begun considering alternative data storage materials, including DNA, which stores information at an impressive density; it is estimated that 1 kg of DNA would be sufficient to store the world’s digital archives. DNA is also stable: while hard disks lose data after less than ten years of storage, Nick Goldman, a researcher pioneering DNA data storage at the European Bioinformatics Institute (EBI), notes that in 2013 researchers successfully read the genome of a horse that had been trapped in permafrost for 700,000 years. But a number of hurdles must be overcome before we can stream our favorite show out of a test tube: 1) reading and (especially) writing DNA sequences is slow, 2) DNA synthesis is error prone, 3) DNA synthesis is currently expensive, and 4) it is difficult to access specific information stored within DNA. Exciting advances over the last few years from researchers at EBI, Harvard, the University of Washington, and Microsoft have begun to address these problems. This year, researchers at Microsoft and the University of Washington reported successfully storing and retrieving 200 megabytes of data in DNA. This is a far cry from the 44 trillion gigabytes of storage we will require in 2020, but progress in science is non-linear, and the need for alternative storage media will motivate the growth of this exciting field. (Andy Extance, Nature News)
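The “world’s archives in a kilogram” claim can be checked with back-of-the-envelope arithmetic, assuming roughly 2 bits per base pair and an average base-pair mass of about 650 daltons (both rounded figures that ignore error-correction overhead):

```python
# Rough DNA storage-density estimate: bytes per kilogram of
# double-stranded DNA. All constants are approximate.
AVOGADRO = 6.022e23          # molecules per mole
BP_MASS_G_PER_MOL = 650.0    # approximate mass of one base pair (g/mol)
BITS_PER_BP = 2.0            # one of 4 bases = 2 bits, ignoring coding overhead

base_pairs_per_kg = 1000.0 / BP_MASS_G_PER_MOL * AVOGADRO
bytes_per_kg = base_pairs_per_kg * BITS_PER_BP / 8

print(f"{bytes_per_kg:.2e} bytes per kg")   # ~2e23 bytes, i.e. ~200 zettabytes
```

At roughly 2 × 10²³ bytes per kilogram, a single kilogram comfortably exceeds the 44 trillion gigabytes (4.4 × 10²² bytes) forecast for 2020, which is consistent with the estimate quoted above.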


Oklahoma Shuts Down Wastewater Injection Wells Following Earthquake

There is a significant amount of wastewater that is released in the process of extracting oil and gas from traditional and hydraulic fracturing (“fracking”) wells. One way to dispose of this wastewater is to inject it deep into the earth’s crust. As oil production has increased within the continental United States within the last few years, wastewater injection has increased in stride. Recent evidence suggests that wastewater injection into rock formations alters pre-existing stresses within faults, in some cases leading to slippage that results in an earthquake. A recent article by Niraj Chokshi and Henry Fountain for the New York Times reported that on September 3rd, Oklahoma experienced a 5.6-magnitude earthquake – tying the state’s previous record for its most severe earthquake set in 2011. In response, Oklahoma government officials ordered the shutdown of three dozen wastewater injection wells in the area most affected by the earthquake. The quake comes amid an impressive increase in earthquake frequency for the state. In 2009, there were only three earthquakes of magnitude 3 or greater, but in 2015, this number increased to over 900. To address this increase, state officials ordered a reduction in wastewater injection last year with the hope of decreasing earthquake activity. To date in 2016 there have been over 400 earthquakes of magnitude 3 or greater in Oklahoma. While it is widely accepted that oil and gas production and the associated wastewater injection have set off a number of earthquakes in Oklahoma and other states, it remains unclear if last Saturday’s earthquake was the result of this activity. In the future, additional monitoring of injection wells will provide valuable data to inform decisions on the placement and operation of wastewater injection wells. (Niraj Chokshi and Henry Fountain, New York Times)


Early Support for Amyloid Plaques as the Causative Agent of Alzheimer’s Disease

As humans live longer, Alzheimer’s disease is becoming an increasingly significant public health problem. The prevailing hypothesis is that aggregation of proteins such as amyloid-β (Aβ) into larger “plaques” leads to Alzheimer’s disease, but there is still no direct evidence that Aβ plaques cause the disease. In a Nature News & Views article this week, Eric M. Reiman summarizes the results of an article published in the same journal, which showed that a human antibody, called aducanumab, was able to reduce Aβ plaques in a dose-dependent manner in a small, 12-month placebo-controlled human trial. Though other Aβ-targeting therapies have successfully reduced Aβ aggregates, the most tantalizing result of this study comes from early exploratory analysis of the trial data, which suggested – based on a study population too small to support definitive conclusions – that higher doses of aducanumab and larger reductions in Aβ plaques were associated with slower cognitive decline. Before accepting the hypothesis that Aβ plaques cause Alzheimer’s disease, it will be critical to repeat the experiment in larger clinical trials appropriately powered to measure the impact of antibody treatment and plaque reduction on cognitive decline. The study authors also noticed that high doses of antibody were sometimes associated with inflammation in the brain, leading them to limit the maximum antibody dose tested. Overall, these are exciting results, which, if confirmed in larger clinical trials, would provide much-needed clarity about the mechanism of Alzheimer’s disease and inform future treatments. (Eric M. Reiman, Nature News & Views)


Written by sciencepolicyforall

September 9, 2016 at 9:20 am

Science Policy Around the Web – June 21, 2016


By: Fabrício Kury, MD


Personalized Medicine Costs

The Paradox of Precision Medicine

Precision medicine has been hailed by President Obama as a multi-hundred-million-dollar “moonshot” meant to revolutionize medicine in a way never seen before. Its rationale derives from the recent field of research called Genome-Wide Association Studies (GWAS), which seeks to discover, at large scale and accelerated pace, the genetic basis of disease, novel targets for drugs, and which treatments work for which patients, at what moments, and at what doses. This very rationale, however, can be self-limiting in a capitalist market where economies of scale are required to provide patients with access to otherwise prohibitively expensive treatments. In this lucid review, Janeen Interlandi from Scientific American shows that old-fashioned, non-personalized treatments have recently been demonstrated not only to be tremendously cheaper than “bespoke” drugs, but also just as clinically effective. (Janeen Interlandi, Scientific American)

Research Ethics

Scientists Are Just as Confused About the Ethics of Big-Data Research as You

Dubbed “the fourth paradigm” of science (book available for free download here), big data research poses novel ethical questions that might not be appropriately addressable by the current paradigm of ethics centered on the Common Rule and oversight by Institutional Review Boards (IRBs). A study can be ruled exempt from IRB approval if it only utilizes publicly available data – but what is “publicly available,” exactly? In this article, Sarah Zhang from Wired magazine reviews recent controversies over the use of large datasets for research, such as the Facebook Emotion Experiment, and suggests that IRBs might need new sets of skills to safeguard human subjects in the evolving landscape of research. (Sarah Zhang, Wired)

Data Science

The Doctor Who Wants You to Be a Research Parasite

After the editor-in-chief of the New England Journal of Medicine published a stinging editorial in January 2016 affirming that some clinical researchers regard data scientists as “research parasites,” a wave of controversy exploded, culminating with personalities such as U.S. Chief Data Scientist DJ Patil and National Academy of Sciences President Marcia McNutt publicly using the hashtag #IAmAResearchParasite in defiance. In this article, Taylor Mayol from Ozy introduces Dr. Atul Butte, recently appointed head of Clinical Informatics at the University of California, who makes a bold call for more “research parasites” in health care, while characterizing the lack of entrepreneurship among academics as “a tragedy,” because entrepreneurship is “the right way to truly change the world, by going beyond writing papers.” (Taylor Mayol, Ozy)


Written by sciencepolicyforall

June 21, 2016 at 9:00 am

Science Policy Around the Web – April 6, 2016


By: Sterling Payne, B.Sc.

Artificial Intelligence

To Beat Go Champion, Google’s Program Needed a Human Army

“It may be a hundred years before a computer beats humans at Go — maybe even longer,” Dr. Piet Hut told George Johnson of The New York Times in a 1997 conversation. The event prompting their discussion was the victory of IBM’s Deep Blue over grandmaster Garry Kasparov in a series of chess games. Dr. Hut’s prediction turned out to be off by about 80 years, thanks to AlphaGo, the product of Google’s DeepMind. AlphaGo recently secured a victory against 9-dan Go champion Lee Sedol in a five-game match hosted by Google. By nature, Go is more complex than chess; its less constrained gameplay offers no surefire way to determine which player is at an advantage. Rather than powering through an analysis of thousands upon thousands of potential moves each turn, AlphaGo utilizes a novel combination of machine-learning methods to determine which board configurations are more advantageous, and positively reinforces correct decisions through thousands of matches played against itself. The product is an artificial intelligence (AI) that more closely resembles human intuition, at least within the narrow scope of the ancient board game.

With its 4-1 victory over Sedol, AlphaGo demonstrated extreme proficiency in the game of Go, but in only that. While inarguably an astounding accomplishment and a significant leap for computer science, AIs like AlphaGo have a long way to go before they can replicate the intuition of the human mind, which extends far beyond an ancient board game. In terms of policy, the very methods used to create AlphaGo could also find their way into hospitals and healthcare facilities in the near future. With the advent of artificial intelligence in the workplace, patients and care providers alike will need to take extra care with personal information, data management, and general communication. (George Johnson, The New York Times) (Will Knight, MIT Technology Review)

Federal Cancer Research

Blue Ribbon Panel Announced to Help Guide Vice President Biden’s National Cancer Moonshot

The Cancer Moonshot Initiative, headed by Vice President Joe Biden, plans to put an end to a disease that has plagued millions of people for hundreds of years. Armed with a $1 billion budget over the next five years, the initiative’s primary aim is to speed up cancer research such that a decade’s worth of discoveries can occur in half that time. Two of the main areas where such discoveries are expected are detection and treatment. A task force to handle financial matters and the progression of the initiative was announced in February, and just yesterday (April 4, 2016) the National Cancer Institute unveiled its Blue Ribbon Panel, a special selection of leaders in cancer research and patient advocacy, to direct the initiative’s efforts to where they are likely to make the largest impact.

As a society, our knowledge of cancer has grown considerably since the turn of the century; cancer is no longer thought of as a single disease, but rather as the product of multiple genetic mutations and cellular microenvironments, painting a unique disease landscape for each person it affects. Members were chosen such that the panel represents multiple walks of science, from immunology to bioinformatics, as well as cancer prevention and treatment. Already armed with capital and a team to guide finances and general progress, the Cancer Moonshot Initiative has taken another giant step forward with the addition of the Blue Ribbon Panel. The full member list of the Blue Ribbon Panel and the original announcement are linked here. (News Releases, National Institutes of Health)


Biology software promises easier way to program living cells

With computer programming, the programmer gives the computer a set of instructions in one (or more) of several different programming languages. These instructions include logical operations such as true-false statements (e.g. “if this is true, then do this”) and various loops (e.g. “while this is true, do this”). At the end of all of this sits a program, executed by the computer to provide some sort of output, whether it be ordering a data set, turning on a light, or spinning a motor. Dr. Christopher Voigt and his lab at MIT have taken these principles and applied them to their new software Cello, a programming language capable of producing working circuits in living systems. Cello requires the user to input commands, such as a function they would like a given cell to perform and the conditions under which it should perform that function. After the input is compiled, the end result is a DNA sequence or “circuit” that, when placed inside a cell, can fulfill the function(s) specified by the user. In a paper recently published in Science (April 1, 2016), Alec Nielsen and colleagues used Cello to generate 60 different DNA circuits, 75% of which worked as expected the first time when introduced into Escherichia coli cells.
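The published Cello circuits were composed largely of genetic NOR and NOT gates, and NOR alone is functionally complete: any Boolean function can be built from it. That is what makes a single well-characterized genetic gate so powerful. A minimal software analogy (this sketches the logic only, not Cello's actual gate assignment):

```python
# NOR is functionally complete: NOT, OR, and AND can all be built from it,
# which is why a reliable genetic NOR gate suffices to compile arbitrary logic.

def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a: bool) -> bool:
    return NOR(a, a)                      # NOR with itself inverts the input

def OR(a: bool, b: bool) -> bool:
    return NOR(NOR(a, b), NOR(a, b))      # invert a NOR to recover OR

def AND(a: bool, b: bool) -> bool:
    return NOR(NOR(a, a), NOR(b, b))      # De Morgan: AND = NOR of the inverses

print(AND(True, True), OR(False, True), NOT(True))  # True True False
```

In the genetic implementation, each software gate above corresponds to a repressor-based promoter: the output gene is transcribed only when neither input repressor is present.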

As synthetic biology continues to grow and gain popularity throughout the research world, it is increasingly important to think about what policies and potential restrictions should be put in place. Engineering de novo biological systems and functions can be extremely powerful, yet, if left in the wrong hands, could have significant consequences, as with any comparably powerful technique (e.g. CRISPR-Cas9). (Erika Check Hayden, Nature News)


Written by sciencepolicyforall

April 6, 2016 at 12:00 pm

Science Policy Around the Web – January 12, 2016


By: Tad Davenport, Ph.D.


Biomedical Resources

Funding for key data resources in jeopardy

The goal of science is to push the boundaries of knowledge. In a letter to John Locke, Isaac Newton famously wrote about his own discoveries: “If I have seen further, it is only by standing on the shoulders of giants.” The pace of scientific discovery is accelerating, and findings accumulate rapidly – the giants whose shoulders we stand on today have never been quite so gigantic, nor have they ever grown so quickly.

Increasingly, biomedical researchers rely on curated databases such as UniProt, OMIM, FlyBase and others to rapidly sort through enormous (and rapidly growing) volumes of information. These databases provide digestible, searchable access to descriptions of protein function and interactions, post-translational modifications, mutations associated with disease, and changes in protein and RNA levels during the development of model organisms including fruit flies and zebrafish. They are essential for generating hypotheses and designing experiments to understand basic biology and disease mechanisms.

A recent report by Jocelyn Kaiser for Science magazine describes the fiscal challenges faced by the National Human Genome Research Institute (NHGRI) of the National Institutes of Health (NIH) in supporting the maintenance of these databases. It is estimated that over $110 million of the NIH’s $30 billion annual budget is spent on maintaining these databases, and the cost is likely to continue growing in parallel with the rapid expansion of genomic and other data. To address the long-term “sustainability problem,” leaders of NHGRI have initiated discussions on alternative funding sources and mechanisms, including subscriptions and use-based fees. The enormous value of these databases for scientific progress is difficult to estimate, and every effort should be made to ensure easy accessibility for all researchers. (Jocelyn Kaiser, Science)

Vaccine Research

Unfilled Vials – Accelerating and Prioritizing Vaccine Development

Vaccines are a highly cost-effective means of preventing transmission of infectious agents. Unfortunately, vaccine development does not always make fiscal sense for pharmaceutical companies. In order to encourage pharmaceutical companies to take on the substantial financial burden of developing and testing vaccines in human clinical trials, it is likely that public-private partnerships, designed to mitigate the financial risk to companies, will play a critical role.

In a recent Science magazine article, Jon Cohen describes some recently proposed mechanisms for igniting private interest in developing and testing vaccines for pathogens that do not typically impact wealthy nations. One important step toward this goal is generating consensus regarding which pathogens should be prioritized for vaccine development.

Based on a poll of twelve vaccine experts, Science magazine generated its own list of the top ten pathogens that should be prioritized in designing new vaccines. Number one on the list was Ebola Sudan, a pathogen known and feared by many in the United States and other wealthy countries. However, a number of the pathogens on this list are less well-known in the United States (but no less important), including Chikungunya, Schistosoma, and Hookworm.  In ranking the pathogens, the contributors considered the pathogen’s impact on human populations, its transmissibility, its “potential to cause economic and social chaos”, and importantly, the feasibility of developing an effective vaccine (based on the immunity generated by natural infection, or preliminary results from tests in animal models). Cohen’s article enlightens the reader by presenting a balanced review of the challenges of vaccine development and a rational mechanism by which much-needed vaccines might be brought to market. (Jon Cohen, Science News)

The Future of Science

Interviews: Big ideas for better science

In a 2015 year-end interview with Kendall Powell of Nature magazine, four notable scientists made recommendations for how to improve the practice and culture of scientific research.

Jin-Soo Kim from Seoul National University suggests that eliminating the one-directional anonymity of the peer review process and openly crediting reviewers would reduce the potential conflict of interest in which a competing scientist is asked to review a colleague’s paper.

Jean-Baptiste Mouret at the French Institute for Research in Computer Science and Automation, recommends improving the openness and accessibility of computer programs used in research, with numerous potential benefits including better reproducibility and faster scientific progress.

Maria Cristina De Sanctis at the Institute for Space Astrophysics and Planetology in Rome emphasizes the importance of encouraging women in science from the very earliest ages.

And Danielle Edwards from the University of California, Merced recommends instilling more humanity in scientific research by providing a safer, more understanding work environment for people with varied experiences of life and its associated challenges.

These can be thought of as “New Year’s Resolutions” for science. How do you resolve to improve science, and more broadly, the world, this year? (Kendall Powell, Nature)


Written by sciencepolicyforall

January 12, 2016 at 9:00 am

Science Policy Around the Web – October 31, 2015


By: Courtney Pinard, Ph.D.

Photo credit: Novartis AG via photo pin cc


How Prevalent is Scientific Bias?

Scientists and clinicians conducting clinical trials must abide by rigorous standards to safeguard against bias. Biomedical animal research has not been held to the same standards, and advocates of robust science have argued that this lack of scientific rigor is why more than half of pre-clinical studies are irreproducible. A recent study from the University of Edinburgh in the U.K. shows that animal researchers are not using the same standards to prevent bias in study design. Such standards include 1) using randomized trials to prevent scientists from, for example, assigning unhealthy animals to the control group to boost a drug’s apparent effect on the treatment group; 2) ensuring that researchers are blinded when assessing the outcomes of an experiment; 3) calculating the correct sample size before starting an experiment; and 4) disclosing any conflicts of interest. The authors examined 2,500 papers on drug efficacy published between 1992 and 2011, and the results were dismal. Only 30% of papers analyzed outcomes in a blinded manner, 25% stated that animals were randomized to groups, 12% included a conflict-of-interest statement, and less than 1% reported calculating the needed sample size in advance. When the authors looked at whether institute quality or journal impact factor predicted bias, they found no correlation. The U.K. study is one of many on the topic of scientific rigor that have fueled growing concern among scientists and the public about irreproducible results in pre-clinical biomedical research.

According to an NIH commentary published last year, the reasons for why scientific bias in animal research is so prevalent are complex and have to do with the attitudes of funding agencies, academic centers, and scientific publishers. Authors of the commentary, Francis Collins and Lawrence Tabak, discuss these attitudes: “Funding agencies often uncritically encourage the overvaluation of research published in high-profile journals. Some academic [centers] also provide incentives for publications in such journals, including promotion and tenure, and in extreme circumstances, cash rewards.”

Given continuing budget constraints, and Congress’ awareness of the reproducibility problem, national funding agencies have started to act. The NIH, for example, organized a workshop with over 30 basic/preclinical science journal editors to put together principles and guidelines to enhance research rigor and reproducibility. One such principle is “Transparency in Reporting,” which includes the bias-safeguarding standards described above. Strengthening pre-clinical biomedical research will only occur when scientists and policy makers at funding agencies, academic institutions, and journals work together to put these principles into practice, and acknowledge that the “publish or perish” attitude rampant in scientific culture needs to change. The situation and its solution were described succinctly in a recent Nature editorial on cognitive bias: “Finding the best ways to keep scientists from fooling themselves has so far been mainly an art form and an ideal. The time has come to make it a science.” (Martin Enserink, ScienceInsider)

Big Data

Proposed Study to Track 10,000 New Yorkers

A newly proposed longitudinal study will attempt to monitor thousands of households in New York City over the span of decades. Information will be gathered in intimate detail about how people in these households lead their lives, including information about diet, exercise, social activities and interactions, purchases, education, health measures, and genetics. This ambitious project is called the Kavli Human Understanding through Measurement and Analysis (HUMAN) project, and aims to quantify the human condition using rigorous science and big data approaches to understand what makes us well and what makes us ill. According to project leaders, existing large-scale data sets have only provided detailed catalogs of narrow aspects of human health and behavior, such as cardiovascular health, financial decision-making, or genetic sequencing. By measuring the feedback mechanisms between biology, behavior, and our environment over decades, researchers believe that much more will be understood about how these factors interact to determine human health over the life cycle. For example, according to articles written by scientists in support of the project, the new data could measure the impact of cognitive decline on performing activities of daily living, on family members and caregivers, and on healthcare utilization or end-of-life decisions. A further goal of the project is to provide data to policy makers so they can develop evidence-based public policies.

Anticipating the privacy and cybersecurity concerns inherent in such an invasive study, Kavli HUMAN project researchers have established a Privacy & Security Advisory Council, composed of members from the private, public, and academic sectors. The Advisory Council includes bioethicists and patient privacy advocates. In addition, project leaders conducted an opinion survey of a diverse group of Americans, asking whether they 1) think the study should be done, and 2) would be willing to participate. The results suggested that nearly 80% think the study should be done and more than half were willing to participate. When questions arise about the ethics of collecting such information, Kavli HUMAN project researchers publicly argue that corporations already track Americans’ spending habits, location, and use of technology, and that “people’s data can be better used to serve them, their communities, and society.” (ScienceInsider, Kelly Servick)

Nutrition and Cancer

A Diet High in Red Meat and Processed Meat Increases Risk for Colorectal Cancer

The World Health Organization International Agency for Research (IARC) announced on Monday that eating too many processed meats are cancer-causing and eating too much red meat is “probably carcinogenic to humans.” Red meat is defined as all types of mammalian muscle meat, such as “beef, veal, pork, lamb, mutton, horse, and goat,” and processed meat is defined as meat that “has been transformed through salting, curing, fermentation, smoking, or other processes to enhance flavor or improve preservation.” The IARC reviewed 800 studies that looked at the association of cancer with consumption of red or processed meat in people around the world, of diverse ethnicities and diets. Results of this analysis revealed that the positive association between red and processed meat consumption and cancer was strongest for colorectal cancer. The Global Burden of Disease Project, an independent academic research organization, estimates that 34,000 cancer deaths per year worldwide are attributable to diets high in processed meat. Studies show that meat processing techniques and cooking this kind of meat at high temperatures can lead to the formation of carcinogenic chemicals, and that these compounds appear in parts of the digestive tract. Specifically, the agency said its experts concluded that each 50 gram portion of processed meat eaten daily increased the risk of colorectal cancer by 18 percent. Red meat was not as strongly associated with cancer as processed meat. Some public health experts criticized the bravado of the IARC announcement. In response to public inquiries, they have published a FAQ page where they state that smoking and asbestos are more likely to be causal for lung and other types of cancers. The announcement did not mark a new discovery, since the original report has been out for several years; it was meant to attract public attention and help countries looking to WHO for health advice. 
According to the director of IARC, “these findings further support current public-health recommendations to limit intake of meat.” (NPR; Anahad O’Connor, New York Times)
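The 18 percent figure above is a *relative* increase, which is easy to misread as an absolute one. A minimal sketch of the distinction, assuming a hypothetical baseline lifetime colorectal cancer risk of about 4% (an illustrative value, not from the IARC report):

```python
# Converting the reported relative risk into an absolute lifetime risk.
# The 18% figure is relative; the ~4% baseline is an assumed value.
baseline_risk = 0.04            # assumed lifetime colorectal cancer risk
relative_risk = 1.18            # per 50 g of processed meat eaten daily
absolute_risk = baseline_risk * relative_risk
increase_pct_points = (absolute_risk - baseline_risk) * 100
print(f"Absolute risk: {absolute_risk:.3f} ({increase_pct_points:.2f} point increase)")
```

Under these assumptions the lifetime risk rises from 4.0% to about 4.7%, an increase of under one percentage point, which is part of why experts cautioned against comparing processed meat to smoking.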

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

October 31, 2015 at 9:00 am

Science Policy Around the Web – July 10, 2015

leave a comment »

By: Daniël P. Melters, Ph.D.

Picture source: Pictures of Big Data

Big, big data

Big Data: Astronomical or Genomical?

Computing power increases according to Moore's law, which states that the number of transistors in central processing units (CPUs) doubles every two years or so. However, this exponential increase in computing power has been outpaced by the rapid growth in the amount of data generated by DNA sequencing, a trend already evident in 2012. As a consequence, more and more computing resources are needed to handle and store genomic data. A recent report in PLoS Biology argued that genomic data production will exceed the storage requirements of Twitter and YouTube combined. The authors estimate that in 10 years' time, between 100,000 and 2 billion human genomes will be sequenced. Storing all this data, including sequencing errors and preliminary analyses, would require about 2-40 exabytes (2-40 million terabytes).
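The upper bound of that estimate can be sanity-checked with simple arithmetic. A rough sketch, assuming about 20 GB per genome (an assumed figure covering raw reads, errors, and intermediate analysis, chosen to match the report's upper bound; actual per-genome footprints vary widely):

```python
# Back-of-envelope check of the PLoS Biology storage estimate.
GB = 10**9
EB = 10**18                       # 1 exabyte = 1 million terabytes
genomes = 2_000_000_000           # upper estimate of genomes sequenced
bytes_per_genome = 20 * GB        # assumed footprint per genome
total_eb = genomes * bytes_per_genome / EB
print(f"{total_eb:.0f} exabytes")  # prints "40 exabytes"
```

Two billion genomes at ~20 GB each lands exactly on the report's 40-exabyte upper bound.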

The need to store large quantities of genomic data is a realistic challenge, and a number of groups around the globe are working towards that goal. The authors of the PLoS Biology paper did not factor major improvements in data compression into their analysis, but various groups (here, here and here) are developing such technologies. One group claims to compress genomic data by roughly 9,500-fold, compared to the 46.9-fold compression of the current National Center for Biotechnology Information (NCBI) standard for storing genomic data (.sra). Europe has set its sights on a continent-wide computing cloud to facilitate data sharing between research groups across scientific disciplines. Similarly, following the January 20th announcement of the Precision Medicine Initiative (PMI), the National Institutes of Health (NIH) has begun drafting its approach to the vast amount of genomic data that the PMI will require. Many angles are being pursued to handle the ever-growing volume of genomic data, and a coordinated effort could streamline this work. (PLoS Biology Perspective)
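The difference between those two compression ratios is worth making concrete. A minimal sketch, using a hypothetical petabyte of raw sequencing reads as input (the 46.9-fold and ~9,500-fold figures come from the article; the input size is illustrative):

```python
# Comparing the ~46.9-fold compression of NCBI's .sra format with the
# ~9,500-fold ratio one group claims. Raw input size is hypothetical.
raw_tb = 1000.0                  # 1 petabyte of raw reads, for illustration
sra_tb = raw_tb / 46.9           # current NCBI .sra standard
claimed_tb = raw_tb / 9500.0     # claimed new method
print(f".sra: {sra_tb:.1f} TB  vs  claimed: {claimed_tb:.3f} TB")
```

Under these assumptions, a petabyte of raw reads shrinks to roughly 21 TB under the current standard but to about a tenth of a terabyte under the claimed method, which is why compression could substantially soften the storage projections above.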

Chemical regulations

New U.S. rules on helium sales said to stifle competition

Two years ago, Congress passed the Helium Stewardship Act, designed to establish a competitive market for federal helium by switching the U.S. Bureau of Land Management (BLM) from selling helium at a fixed price to selling it at auction. A recent congressional panel heard that the new law has had the opposite effect: the number of companies buying from the reserve fell by 50% (from 8 to 4). Helium is an indispensable noble gas for many types of technology and scientific research, including cooling MRI scanners, industrial leak detection systems, and telescope lenses. As a byproduct of natural gas drilling, the U.S. accrued a vast helium reserve from the 1960s onward; about 1 trillion liters of helium gas is stored in a natural geological formation near Amarillo, Texas. In 1996, Congress passed the Helium Privatization Act, ordering the BLM to sell off the helium reserve at a price that would recoup the amount the government had spent accumulating the gas. This selling strategy led to wasteful use of helium, as a 2010 report from the National Academies' National Research Council pointed out. Furthermore, the BLM's mandate lasted only until the full $1.3 billion was recouped, which would have happened in September 2013, leaving about 370 billion liters of helium stored away. This led Congress to pass the Helium Stewardship Act. Of the 12 companies that sell refined helium, only four refine it themselves, and these four used the auction to limit their competitors' access to the federal reserve by paying a premium (52% above non-auction prices) for the gas. In the end, companies and researchers suffer most, with some paying as much as $40 per liter of liquid helium. The helium reserve is expected to last another six years. The next BLM auction is scheduled for August 18th. (Adrian Cho, ScienceInsider)

Antibiotic resistance

Bacteria-Eating Viruses Could Be New Ally Against Superbugs In A ‘Post-Antibiotic Era’

Bacteriophages are viruses that infect and kill bacteria in a host-specific manner. While it is dangerous for humans when a virus jumps the species barrier – as was the case with swine flu (the H1N1 pandemic) and SARS (whose origin is thought to be the masked palm civet) – it is extremely rare for viruses to cross kingdom barriers, as would be required for bacteriophages to infect humans. In light of increasing reports of antibiotic-resistant bacteria, the World Health Organization has classified antibiotic resistance as a growing global health problem. An alternative approach would be to combat bacteria with their natural enemies: bacteriophages. After all, they have been extensively used and studied in biomedical science. A few European countries already use bacteriophages to combat bacterial infections, but they are not authorized as a medicinal product. Physicians would like regulatory agencies to allow the use of phages on a single-patient basis, whereas pharmaceutical companies' ultimate aim is marketing authorization in the European Union. The European Medicines Agency (EMA) held a workshop on the therapeutic use of bacteriophages on June 8th. The EMA emphasized that, as with established treatment methods, the efficacy and safety of phages in a clinical setting must be demonstrated before any recommendations for approval can be given; to date, very few randomized controlled clinical trials of bacteriophage therapy have been conducted. Nevertheless, the EMA looks forward to gathering more robust evidence on the value of bacteriophage treatments and to further discussing the scientific and regulatory aspects of biologically characterizing the phages. 
Pharmaceutical companies are encouraged to engage in early dialogue with the EMA by applying for scientific advice, through which they can receive further guidance on how to develop their products. (Ben Hirschler, Huffington Post)

Quick note!

Marcia McNutt, editor-in-chief of the Science family of journals, has been nominated to become the first female president of the U.S. National Academy of Sciences.

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

July 10, 2015 at 10:31 am