Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘scientific publishing’

Science Policy Around the Web – June 23, 2017


By: Saurav Seshadri, PhD

Drug Policy

Trump’s New Policy to Tackle Sky-High Drug Prices Makes Sense — Sort Of

Tackling high prescription drug prices was a repeated promise of the Trump campaign. The Trump administration has now taken its first step towards fulfilling this pledge, outlined in a blog post by Food and Drug Administration (FDA) commissioner Scott Gottlieb. The agency will pursue a Drug Competition Action Plan, whose goal will be to eliminate obstacles to the development of cheap generic drugs – particularly those caused by loopholes in existing FDA policies, which are exploited by pharmaceutical companies to extend their patent exclusivity period and maximize profits. An example of such ‘gaming’ the system, cited in the post, is the practice of limiting access to branded products for comparative testing by generic developers. Ultimately, the FDA will work closely with the Federal Trade Commission (FTC) to address such issues, since directly regulating business practices is outside its mandate.

On its face, the FDA’s effort is a step in the right direction. Availability of generics reduces the cost of medications by over half within the first year, and according to a recent Congressional report, manufacturers state that ‘competition…is the primary driver of generic drug prices’. However, it ignores evidence that the real driver of increased drug spending is new, branded medicines, not overpriced generics. In fact, early indications are that Trump’s policies will favor the pharmaceutical companies that produce such medicines, by reducing regulations and apparently abandoning his promise to enable the government to negotiate drug pricing through Medicare. Overall, these actions signal a commitment to promoting free market mechanisms in the pharmaceutical industry; time will tell whether this approach will actually lead to more affordable drugs. (Julia Belluz, Vox)

Cancer

In a Major Shift, Cancer Drugs go ‘Tissue-Agnostic’

With the landmark approval of Keytruda in May, the Food and Drug Administration (FDA) appears to have ushered in a new era of cancer drug development. So far, cancer treatment and drug evaluation have largely used the tumor’s tissue of origin as a starting point. Keytruda (an immune-system-enabling drug developed by Merck and approved for melanoma in 2014) marked the first departure from this approach, receiving priority approval to treat any solid tumor containing a mutation in the mismatch repair pathway, regardless of context. Recently released data suggest that another tissue-agnostic cancer therapy is on the way: larotrectinib (a cell growth inhibitor developed by Loxo Oncology) showed high efficacy for any tumor with a certain biomarker (TRK fusion). Several other such drugs, whose indications will be based on tumor genetics rather than location, are in the clinical pipeline.

Although these advances have generated significant excitement in the cancer community, some caveats exist. First, identifying the patients who could benefit from tissue-agnostic treatments will require individual initiative and depend on the cost of screening, particularly for markers that are rare in a given tumor type. A potential solution is suggested by the NCI-MATCH trial, part of the NIH’s Precision Medicine Initiative (PMI): patients can enroll in one of several parallel clinical trials if a corresponding drug-targeted mutation is found in their tumor’s genome. If these trials prove effective, patients could eventually be routinely matched with a personalized, tissue-agnostic, biologically valid treatment based on a standardized screen. Second, researchers caution that tissue-agnostic studies should have a strong scientific rationale and/or breakthrough-level efficacy. Otherwise, such efforts ‘could actually slow drug development if there are differential effects across tumor types by diverting resources from enrolling patients in a predominant population or in the tumor type most likely to respond’.

Despite these concerns, the tissue-agnostic paradigm offers great promise for cancer patients. NIH-funded resources such as The Cancer Genome Atlas could be invaluable to this field moving forward. (Ken Garber, Science)

Scientific Publishing

US Court Grants Elsevier Millions in Damages from Sci-Hub

A New York district court has awarded academic publishing giant Elsevier $15 million in damages from Alexandra Elbakyan, founder of the website Sci-Hub, for copyright infringement. Elbakyan, a 27-year-old neuroscientist turned programmer, started Sci-Hub in 2011 with the goal of ‘remov[ing] all barriers in the way of science’. The site allows users to download research papers that would normally be blocked by a paywall, by obtaining credentials from subscribing institutions and using them to access publisher-run databases like ScienceDirect. Over 60 million papers are posted on Sci-Hub, and users downloaded 28 million articles in 2016.

Elbakyan’s case is reminiscent of that of Aaron Swartz, another high-profile champion of open access to scientific research. Faced with federal charges related to his hacking of the journal archive JSTOR, Swartz tragically committed suicide in 2013. Both Elbakyan and Swartz found publishers’ ability to profit from restricting access to scientific literature, effectively withholding knowledge from anyone outside a privileged inner circle, as well as the legal protection afforded to this system, to be deeply unethical. Their willingness to act on these convictions has earned each a sizable following in the scientific community.

For their part, publishers claim that fees go toward overhead, and point to significant efforts to expand free and open access programs. While judges have so far been sympathetic to Elsevier, the legal battle has been largely one-sided: Elbakyan has ignored rulings requiring her to shut down Sci-Hub since 2015, opting simply to change domains, and since she is currently based in Russia and has no American assets, she is unlikely to pay any damages. (Quirin Schiermeier, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

June 23, 2017 at 11:00 am

Science Policy Around the Web – May 24, 2017


By: Joel Adu-Brimpong, BS

Source: Flickr by Selena N. B. H. via Creative Commons

Scientific Publishing

Fake It Until You’re Caught?

The beauty of the scientific enterprise is that it is, eventually, self-correcting. Thus, occasionally, a scientific paper may be retracted from a journal based on new revelations or due to reports of ethical breaches. Tumor Biology, a peer-reviewed, open access journal disseminating experimental and clinical cancer research, however, seems to have set a record for the number of papers retracted at once. In a single notice, in April, Tumor Biology retracted 107 articles; yes, one hundred and seven!

Springer, the former publisher of Tumor Biology, reported that the retracted papers were due to a compromised peer review process. Like other journals, Tumor Biology allows the submission of preferred reviewer information (name and email address) when submitting a manuscript. In the case of the retracted papers, “the reviewers were either made up, or had the names of real scientists but false email addresses.” Unsurprisingly, the manuscripts sent to the fake reviewers consistently received positive reviews, bolstering the likelihood of publication.

Springer, of course, is not the first major publisher to uncover issues in its peer-review process leading to mass retractions. A 2016 paper reveals similar issues at other major publishers, including SAGE, BioMed Central and Elsevier. These breaches are particularly worrisome because some of the retracted manuscripts date back to the beginning of the decade, meaning that studies in other journals may have built on knowledge reported by the retracted studies. As if this were not enough, Springer has also come under scrutiny for individuals listed on Tumor Biology’s editorial board, several of whom appear to have no association with the journal; at least one had been deceased for several years.

These discoveries are particularly disturbing and are surfacing at a time when biomedical research spending is under increasing scrutiny. Richard Harris, the award-winning NPR journalist, in his recent book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions (2017), highlights major areas of biomedical research that produce waste, such as studies that can lead researchers, and even whole fields, to chase a phantom lead. In the meantime, it does appear that journals are taking measures to ensure that these breaches are minimized, if not prevented entirely. (Hinnerk Feldwisch-Drentrup, ScienceInsider)

Research Funding

Fighting On All Fronts: Republican Senators Advocate for DOE’s Research Funding

Republican senators are, again, urging President Trump to rethink potential budget cuts to research programs; this time at the Department of Energy (DOE). On Thursday, May 18, 2017, six senior Senate Republicans, including Lamar Alexander (R-TN), Lindsey Graham (R-SC) and Lisa Murkowski (R-AK), sent a letter to the President reminding him of the importance of government-sponsored research. In the letter, they write, “Government-sponsored research is one of the most important investments our country can make to encourage innovation, unleash our free enterprise system to create good-paying jobs, and ensure American competitiveness in a global economy.” They go on, “It’s hard to think of an important technological advancement since World War II that has not involved at least some form of government-sponsored research.”

If it seems like we’ve been down this road before, it’s because we have. Earlier this year, Rep. Tom Cole (R-OK), who serves on the House Appropriations and Budget Committees, and his colleagues signaled disagreement with proposed budget cuts to the NIH and CDC in President Trump’s fiscal blueprint. The Republican congressman reiterated the importance of agencies like the NIH and CDC in conducting crucial biomedical research and leading public health efforts that protect Americans from disease. This strong commitment to advancing biomedical research and the health of the American people led to an omnibus agreement that repudiated President Trump’s proposed cuts, increasing NIH funding by $2 billion for the 2017 cycle.

The letter by Senator Alexander and colleagues was drafted following reports suggesting that the DOE’s Office of Energy Efficiency and Renewable Energy could face a reduction in funding of up to 70 percent for the 2018 fiscal cycle. In a separate follow-up analysis, Democrats on the Joint Economic Committee reported on the growth and importance of clean energy jobs and their contribution to the economy. Cuts to the DOE’s research programs could have a profound impact not only on millions of jobs but also on America’s ability to stay competitive in a global economy shifting toward renewable energy and resources. (Geof Koss, ScienceInsider)


Scientific Activism: Voting to Speed Up Discovery with Preprint Publishing


By: Thaddeus Davenport, PhD

Source: Public Library of Science, via Wikimedia

The election of Donald Trump to the Oval Office and the early actions of his administration have sparked a wave of protests in support of women’s rights and immigration, among other issues. Like other citizens, scientists have some cause to be concerned about the administration’s early actions that reveal a general disregard for facts and scientific evidence. In response, organizers have planned the March for Science for this Saturday, April 22nd, as an opportunity for people to gather in cities around the world to voice their support for factual information and scientific research. And while it is important to denounce the actions of the Trump administration that are harmful to science and health, it may be even more critical to acknowledge the underlying partisan divisions that created a niche for his rhetoric and to begin the difficult work of bridging the divide. For example, a Pew Research Center poll from 2015 indicates that 89% of liberal Democrats believe government investment in basic science pays off in the long-run, while only 61% of conservative Republicans feel the same way. Additionally, American adults with less knowledge of scientific topics are more likely to believe that government funding of basic science does not pay off. This suggests that improved science education and outreach will be important in building public support for scientific research. However, scientists often lead very busy lives and have little time outside of their professional activities to devote to valuable pursuits like science outreach. How, then, might scientists work towards building a better relationship with the public?

The products of science – knowledge, medicines, technology – are the clearest evidence of the value of research, and they are the best arguments for continued research funding. Efficiency in science is good not only for scientists hoping to make a name for themselves, but also for the public, who, as the primary benefactors of academic research, must benefit from the products of that research. If taxpayers’ demand for scientific inquiry dissipates because of a perceived poor return on their investment, then the government, which supposedly represents these taxpayers, will limit its investment in science. Therefore, in addition to communicating science more clearly to the public, scientists and funding agencies should ensure that science is working efficiently and working for the public.

Information is the primary output of research, and it is arguably the most essential input for innovation. Not all research will lead to a new product that benefits the public, but most research will yield a publication that may be useful to other scientists. Science journals play a critical role in coordinating peer review and disseminating new research findings, and as the primary gatekeepers to this information, they are in the difficult position of balancing accessibility to the content of their journals with the viability of their business. This position deserves some sympathy in the case of journals published by scientific societies, which are typically non-profit organizations that perform valuable functions including scientific outreach, education and lobbying. However, for-profit journals are less justified in making a significant profit out of restricting access to information that was, in most cases, obtained through publicly-funded research.

Restricting access to information gathered in the course of research risks obscuring the value of research to a public that is already skeptical about investing in basic science, and it slows down and increases the cost of innovation. In light of this, there is growing pressure on publishers to provide options for open-access publishing. In 2008, the National Institutes of Health adopted a public access policy, which requires that “investigators funded by the NIH submit or have submitted for them to the National Library of Medicine’s PubMed Central an electronic version of their final, peer-reviewed manuscripts upon acceptance for publication, to be made publicly available no later than 12 months after the official date of publication: Provided, that the NIH shall implement the public access policy in a manner consistent with copyright law.” This policy was extended through an executive order from the Obama Administration in 2013 to include all federal agencies with research budgets greater than $100 million, with additional requirements to improve accessibility.

These requirements are changing scientific publishing and will improve access to information, but they remain limited relative to the demand for access, as evidenced by the existence of paper-pirating websites and the success of open access journals like PLoS and eLife. Additionally, other funding agencies like the Bill and Melinda Gates Foundation and the Wellcome Trust have imposed even more stringent requirements for open access. Indeed, researchers will find a spectrum of open-access policies among the available journals, with the most rapid access to information allowed by so-called ‘preprint’ publishers like biorxiv.org. Given that many research manuscripts require months or years of revision and re-revision during submission to (usually multiple) journals, preprint servers accelerate the dissemination of information that is potentially valuable for innovation, by allowing researchers to post manuscripts prior to acceptance in a peer-reviewed journal. Many journals have now adopted explicit policies for handling manuscripts that have been previously submitted to bioRxiv, with many of them treating these manuscripts favorably.

Given that most journals accept manuscripts that have been previously published on bioRxiv, and some journals even look to bioRxiv for content, there is little incentive to submit to journals without also submitting to bioRxiv. If the goal is, as stated above, to improve the transparency and the efficiency of research in order to make science work for the public, then scientists should take every opportunity to make their data as accessible as possible, and as quickly as possible. Similarly, funding agencies should continue to push for increased access by validating preprint publications as acceptable evidence of productivity in progress reports and grant applications, and incentivizing grant recipients to simultaneously submit manuscripts to preprint servers and peer-reviewed journals. Scientists have many options when they publish, and by voting for good open-access practices with their manuscripts, they have the opportunity to guide the direction of the future of scientific publishing. These small, but important, actions may improve the vitality of research and increase the rate at which discoveries tangibly benefit taxpayers, and, in combination with science outreach and education, may ultimately strengthen the relationship between scientists and the public.

March for Science this Saturday, if it feels like the right thing to do, and then strive to make science work better for everyone by sharing the fruits of research.


Written by sciencepolicyforall

April 20, 2017 at 11:44 am

Science Policy Around the Web – February 17, 2017


By: Thaddeus Davenport, PhD

Source: pixabay

CRISPR

Decision in the CRISPR-Cas9 Patent Dispute

This week, Heidi Ledford from Nature News reported that the United States Patent and Trademark Office (USPTO) made a decision on the disputed patents for the gene editing technology known as CRISPR-Cas9 in favor of the Broad Institute of MIT and Harvard. The CRISPR-Cas9 system has been widely publicized, and this publicity is arguably not out of proportion with the potential of this technology to simplify and accelerate the manipulation of DNA of both microbial (prokaryotic) and higher order (eukaryotic) cells for research and therapy. A simplified, programmable version of CRISPR-Cas9 for use in gene editing was initially described by Charpentier and Doudna, and it was rapidly translated for use in eukaryotic cells by Zhang and colleagues at the Broad Institute in parallel with Doudna, Charpentier, and others.

The USPTO decision follows a dramatic and ongoing dispute over whether the patent application submitted by the University of California on behalf of Doudna and Charpentier – which was submitted before that of the Broad Institute, and described the technology in broad terms as a method of cutting desired DNA sequences – was sufficient to protect the CRISPR-Cas9 intellectual property when the Broad Institute later filed a fast-tracked patent application describing the use of CRISPR-Cas9 for use in eukaryotic cells. Because the Broad Institute’s application was expedited, it was approved before the University of California’s application. In January of 2016, the University of California filed for an ‘interference’ proceeding, with the goal of demonstrating to the USPTO that Doudna and colleagues were the first to invent CRISPR-Cas9, and that the patent application from the Broad Institute was an ‘ordinary’ extension of the technology described in the University of California application.

On February 15th of this year, the USPTO ruled that the technology described in the Broad Institute’s application was distinct from that described in the University of California’s. The importance of this decision is that the patents granted to the Broad Institute for the use of CRISPR-Cas9 in mammalian cells will be upheld for now. It also creates some complexity for companies seeking to license CRISPR-Cas9 technology. Because of the overlapping content of the CRISPR-Cas9 patents held by the University of California and the Broad Institute, it is possible that companies may need to license the technology from both institutions. The University of California may still appeal the USPTO’s decision, but this is a significant victory for the Broad Institute for the time being. For many scientists, this dispute is a dramatic introduction to the inner workings of the patent application process. We would do well to familiarize ourselves with this system and ensure that it works effectively to accurately reward the discoveries of our fellow scientists and to facilitate the transfer of technology to those who need it most, without imposing undue economic burden on companies and consumers. (Heidi Ledford, Nature News)

Scientific Publishing

Open Access to Gates Foundation Funded Research

Also this week, Dalmeet Singh Chawla reported for ScienceInsider that the Bill and Melinda Gates Foundation had reached an agreement with the American Association for the Advancement of Science (AAAS) that will allow researchers funded by the Gates Foundation to publish their research in the AAAS journals Science, Science Translational Medicine, Science Signaling, Science Immunology, and Science Robotics. This agreement follows an announcement in January in which the Gates Foundation decided that research funded by the foundation would no longer be allowed to be published in subscription journals including Nature, Science, and New England Journal of Medicine, among others, because these journals do not meet the open access requirements stipulated by the new Gates open-access policies. The new Gates Foundation policy requires its grant recipients to publish in free, open-access journals and to make data freely available immediately after publication for both commercial and non-commercial uses. A similar policy is being considered by the nascent Chan Zuckerberg Initiative.

In the agreement with AAAS, the Gates Foundation will pay the association $100,000 in order to make Gates-funded published content immediately freely available online. Convincing a journal as prominent as Science to make some of its content open-access is a step in the right direction, but it is perhaps more important as a symbol of a changing attitude toward publishing companies. Michael Eisen, co-founder of the Public Library of Science (PLoS) open-access journals, was interviewed for the ScienceInsider article and noted, “[t]he future is with immediate publication and post-publication peer review, and the sooner we get there the better.” This sentiment seems to be increasingly shared by researchers frustrated with the hegemony of the top-tier journals, their power over researchers’ careers, and the constraints that subscription-based journals impose on the spread of new information. Funding agencies including the Gates Foundation, Howard Hughes Medical Institute, and the National Institutes of Health are in a unique position to be able to dictate where the research they fund may be published. A collective decision by these agencies to push the publishing market towards an improved distribution of knowledge – through open-access publishing and post-publication peer review – and away from the historical and totally imagined importance of validation through high-tier journal publication would enrich the scientific ecosystem and accelerate innovation. In this regard, the efforts by the Gates Foundation are laudable and should be extended further. (Dalmeet Singh Chawla, ScienceInsider)


Written by sciencepolicyforall

February 17, 2017 at 12:44 pm

Science Policy Around the Web – January 27, 2017


By: Nivedita Sengupta, PhD

Source: NIH Image Gallery on Flickr, under Creative Commons

Human Research Regulation

US Agency Releases Finalized ‘Common Rule’, Which Governs Human-Subjects Research

On September 8, 2015, the US Department of Health and Human Services (HHS) proposed significant revisions to the Federal Policy for the Protection of Human Subjects, also known as the “Common Rule”: the set of federal regulations governing the conduct of clinical research involving human subjects. Among the proposed changes, an important one concerned obtaining people’s consent before using their biological samples in subsequent studies. On January 18, 2017, the final version of the rule was released, and the proposed change was abandoned. This is a blow to patient-privacy advocates; however, the US National Academies of Sciences, Engineering, and Medicine had argued against that requirement and others, citing the undue burden the changes would impose on researchers, and recommended that it be withdrawn.

The current version of the Common Rule has generated mixed feelings. Researchers are happy that the government listened to scientists’ fears about increased research burdens, whereas people like Twila Brase, president and co-founder of Citizens’ Council for Health Freedom in St Paul, Minnesota, are disappointed, believing that these specific changes ought to have been made. Moreover, the new version of the Common Rule requires that scientists include a description of the study, along with its risks and benefits, on the consent forms used by patients, and that federally funded trials post patient consent forms online. However, these requirements do not extend to trials conducted with non-federal funds. (Sara Reardon, Nature News)

Biomedical Research

An Open-Science Effort to Replicate Dozens of Cancer-Biology Studies is Off to a Confusing Start

The Reproducibility Project: Cancer Biology was launched in 2013 to scrutinize the findings of 50 cancer papers from high-impact journals. The aim is to determine what fraction of influential cancer biology studies is sound. In 2012, researchers at the biotechnology firm Amgen performed a similar study and announced that they had failed to replicate 47 of 53 landmark cancer papers, but they did not identify the studies involved. In contrast, the Reproducibility Project makes all of its findings open. Full results should appear by the end of the year, and eLife is publishing the first five fully analyzed reports this January. Of the five, one failed to replicate, and the remaining four showed replication results that are less clear.

These five results paint a muddy picture for those waiting on the outcomes to gauge the impact of these studies. Though some researchers praised the project, others feared unfair discredit of their work and careers. According to Sean Morrison, a senior editor at eLife, the results were “uninterpretable” because things went wrong with the tests used to measure tumor growth in the replication attempts, and the replication researchers were not allowed to deviate from the protocols, which had been agreed at the start of the project in consultation with the original authors. “Doing anything else — such as changing the experimental conditions or restarting the work — would have introduced bias”, says Errington, the manager of the Reproducibility Project.

According to Errington, the clearest finding from the project is that the papers include very few details about their methods. The replication researchers had to spend hours working out the detailed protocols and reagents along with the original authors. Even after following the exact protocols, the final reports list many reasons why the replication studies might have turned out differently, ranging from variations in laboratory temperature to tiny differences in how a drug was delivered. Errington thinks that the project helps bring such confounding details to the surface, which will be a great service to future follow-up work toward a cure for cancer. However, some scientists think that such conflicts mean the replication efforts are not very informative, cannot fairly be compared to the original studies, and will only delay future clinical trials. (Monya Baker and Elie Dolgin, Nature News)



Science Policy Around the Web – July 22, 2016


By: Nivedita Sengupta, Ph.D.

photo credit: Alex E. Proimos via photo pin cc

The Common Rule and human testing

Science academies blast US government’s planned research-ethics reforms

Ultimate advancement in human health and welfare depends on research with human subjects. Properly controlled studies are imperative both to prevent abuse of participants and to protect their data. To address these concerns, the “Common Rule” was established in 1991, influenced by the Belmont Report, a 1978 document that laid out principles for ethical research with humans, such as minimizing patient harm and maximizing the benefit to society. The “Common Rule” is the current set of human-subject regulations, addressing ethical issues such as informed consent and the storage of study participants’ biological specimens, among many others. With technological advances over time, however, achieving these goals has become more complicated, particularly with respect to maintaining patient privacy. Hence, in September 2015 the US government proposed revisions to the regulations governing studies of human subjects.

Recently, however, an independent advisory panel argued that the US government’s proposed overhaul of the Common Rule is flawed and should be withdrawn. On June 29th, the US National Academies of Sciences, Engineering and Medicine said that the government’s proposed changes are “marred by omissions and a lack of clarity”. They indicated that the changes would slow down research while doing little to improve protections for patients enrolled in studies. The panel recommended that the government appoint an independent commission to craft new rules for such research.

The changes proposed by the US Department of Health and Human Services (HHS) attempted to address concerns that have arisen since the ‘Common Rule’ was established. For instance, the HHS reforms suggest a requirement for participants’ consent to use stored samples, such as blood or tissue, in future research. But the US academies’ panel pointed out that the new consent requirements would slow research unnecessarily, because little harm is likely to come to a person as a result of the use of stored samples. Moreover, the extra consent forms could link the samples to the person’s name, thus increasing the risk of identification.

HHS is currently reviewing more than 2,100 public comments on its proposal, many of them critical. However, the US academies’ panel says that the proposal should be scrapped and that HHS should start fresh by appointing an independent commission to recommend reforms to the Common Rule. Meanwhile, an HHS spokesperson said that the government is still considering the public’s comments and the report. She adds that the proposal comes after “many years of work,” and “that starting over would require many more.” (Sara Reardon, Nature News)

Scientific Publishing

Beat it, impact factor! Publishing elite turns against controversial metric

The Journal Impact Factor (JIF), one of the most prominent and controversial metrics in science, is currently facing a backlash from the scientific community. A journal's impact factor, calculated by several companies, is the average number of citations received in the current year by the articles that journal published in the previous two years. It is intended solely to indicate the quality of journals, said Heidi Siegel, a spokesperson for Thomson Reuters, the major publisher of JIFs. The irony, however, is that researchers often use the JIF to judge individual papers instead, and in some cases even their authors.
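As a rough illustration of that definition (using entirely hypothetical counts, not real Thomson Reuters data), a 2015 impact factor is just one division:

```python
# Hypothetical example of the impact factor calculation: citations received
# in year Y by items published in years Y-1 and Y-2, divided by the number
# of citable items published in those two years. All counts are made up.
citations_in_2015 = 7620        # citations in 2015 to the journal's 2013-14 articles
citable_items_2013_14 = 200     # articles the journal published in 2013 and 2014

impact_factor_2015 = citations_in_2015 / citable_items_2013_14
print(impact_factor_2015)  # 38.1
```

Because it is a mean, a single heavily cited article can raise this number for every paper the journal publishes.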

On July 5th, several leading science publishers posted a paper to the preprint server bioRxiv asking all journals to consider a different metric, one that captures the full range of citations a journal's articles receive. The American Society for Microbiology in Washington DC also announced plans to remove the impact factor from its journals and website, as well as from its marketing and advertising.

Stephen Curry, a structural biologist at Imperial College London and lead author of the bioRxiv preprint, said that many researchers unfortunately evaluate papers by the impact factor of the journal they appear in, which can also influence decisions made by hiring committees and funding agencies. Curry's team highlighted the metric's limitations by plotting the distribution of citations (those used to calculate the 2015 impact factors) for articles published in 2013–14 in 11 journals, including Science, Nature and eLife. Most papers gathered fewer citations than their journal's impact factor: 74.8% of Nature articles were cited fewer than 38.1 times (its impact factor), and 75.5% of Science papers were cited fewer than 35 times (against an impact factor of 34.7). A handful of highly cited papers cause this disconnect: Nature's most cited paper in the analysis was referenced 905 times, and Science's 694 times. Curry and his team recommend using citation distribution curves instead of the JIF, as they provide a more informative snapshot of a journal's standing.
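The skew behind that disconnect is easy to reproduce with toy numbers (hypothetical counts, not drawn from the team's data): one blockbuster paper pulls the mean, and hence the impact-factor-style average, above what most papers actually earn.

```python
# Toy citation counts for ten papers in a hypothetical journal; the single
# outlier (240 citations) inflates the average the way a few highly cited
# papers inflate a journal's impact factor.
citations = [2, 3, 4, 5, 6, 7, 8, 10, 15, 240]

mean = sum(citations) / len(citations)                        # impact-factor-style average
share_below = sum(c < mean for c in citations) / len(citations)

print(mean)         # 30.0
print(share_below)  # 0.9 -> 90% of the papers sit below the journal's "average"
```

A citation distribution curve would show the nine modestly cited papers and the one outlier directly, which is exactly the extra information Curry's team argues the single-number JIF hides.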

Ludo Waltman, a bibliometrics researcher at Leiden University in the Netherlands, agrees that citation distributions are more relevant than impact factors for making hiring and promotion decisions, but feels that impact factors can still be useful to researchers in some cases. Nonetheless, anti-impact-factor campaigners believe it will take time, and pressure from many directions, to diminish the metric's influence, which has become entrenched in scientific culture. (Ewen Callaway, Nature News)

Brain research advancements

Human brain mapped in unprecedented detail

Neuroscientists have long sought to divide the brain into smaller pieces to better understand how it works as a whole. On July 20th, Nature published an unprecedented map of the brain's outermost layer, the cerebral cortex, subdividing each hemisphere into 180 separate areas. Ninety-seven of these areas had never been described previously, despite showing clear differences in structure, function and connectivity from their neighbors.

“Until now, most brain maps were based on a single type of measurement, which provides an incomplete view of the brain’s inner workings,” says Thomas Yeo, a computational neuroscientist at the National University of Singapore. The new map is based on multiple MRI measurements, including the flow of blood in response to different mental tasks, which Yeo says “greatly increases confidence that they are producing the best in vivo estimates of cortical areas.”

The map was constructed by a team led by neuroscientist Matthew Glasser at Washington University Medical School, using imaging data from 210 healthy young adults participating in the Human Connectome Project, a National Institutes of Health-funded initiative to map the brain’s structural and functional connections. The team collected information on cortical thickness; brain function; connectivity between regions; topographic organization of cells in brain tissue; and levels of myelin, a fatty substance that speeds up neural signaling. Borders on the map were delineated where two or more of these properties changed significantly. Analysis of all the data confirmed the existence of 83 previously reported brain areas and identified 97 new ones. The researchers then validated the map by finding the same regions in the brains of an additional 210 people. The size of the areas varied from person to person, however, and these differences may yield new insights into individual variability in cognitive ability, including how individual talents intersect with intellectual and creative capacities.

But the map is limited in some important ways: it reveals little about the brain's biochemistry or about the activity of single neurons or small groups of them. Still, Glasser says, “We’re thinking of this as version 1.0, that doesn’t mean it’s the final version, but it’s a far better map than the ones we’ve had before.” (Linda Geddes, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

July 22, 2016 at 9:00 am

Science Policy Around the Web – May 10, 2016

leave a comment »

By: David Pagliaccio, Ph.D.

Source: Ashley Fisher / Flickr

Scientific Publishing

Who’s downloading pirated papers? Everyone

Sci-Hub, an online repository of millions of scientific and academic articles, has sparked major controversy in the scientific and publishing communities. The site, launched in 2011 by Alexandra Elbakyan, a graduate student in Kazakhstan, provides free access to 'pirated' articles that would otherwise be accessible only through personal or institutional journal subscriptions, or by purchasing individual articles at around $30 each. A recent analysis of Sci-Hub's 28+ million download requests from September 2015 through February 2016 found that they came from over 3 million different IP addresses (potentially many more individual users, since people sharing a university network often share an IP address). The requests came from all over the world and from every scientific field, with download rates exceeding 200,000 per day. An opinion survey found that at least half of users download articles from Sci-Hub because they have no other access to them; interestingly, many others use Sci-Hub purely out of convenience even when they still have access through their institution. Many respondents also use Sci-Hub in objection to the profits publishers make off of academics, and feel that efforts like Sci-Hub can disrupt the status quo of scientific publication. That said, Elsevier, one of the largest publishers affected by Sci-Hub, launched a lawsuit against Elbakyan last year for infringing its legal rights as a copyright holder. Despite having its domain seized during the lawsuit, Sci-Hub, being based in Russia, is largely beyond the reach of the U.S. legal system. This still-evolving situation and debate may have large effects on the state of scientific publishing, particularly given the major support Sci-Hub enjoys from much of the scientific community. (John Bohannon, Science News)

Mental Health

New Study Shows Mental Health Diagnoses and Treatment Vary Significantly by Race and Ethnicity

The Department of Research and Evaluation at Kaiser Permanente published results of a large study in the journal Psychiatric Services on the diagnosis and treatment of mental health conditions. The study drew on electronic health records of 7.5 million adult patients from 11 private, not-for-profit health care systems participating in the Mental Health Research Network. The results indicated that 15.6% (1.17 million) of these patients received a mental health diagnosis in 2011, with rates varying by race and ethnicity from 7.5% among Asian patients to 20.6% among Native American/Alaskan Native patients; most groups had lower diagnosis rates than non-Hispanic white patients. Importantly, regardless of race and ethnicity, patients with a diagnosed mental health condition were much more likely to receive psychiatric medications (73%) than formal psychotherapy (34%). While the study does not point to specific causative factors, the findings indicate a need to evaluate the causes and effects of racial and ethnic differences in mental health diagnosis and treatment, as well as the large discrepancy between treatment by medication and by therapy. (PR Newswire)

Child Development Policies

Bringing Brain Science to Early Childhood

Researchers at Harvard’s Center on the Developing Child are pushing for better use of developmental psychology and neuroscience research in the creation and implementation of policy on early-childhood programs. In particular, they critique the incentives in the current policy system and call for research and development on the early-childhood programs most effective at stemming intergenerational poverty. Child development programs, they argue, should be grounded in the rapidly evolving scientific knowledge base and should be allowed to develop as we learn and understand more. Work in this area has shown lifelong consequences of early childhood stress, as well as lifelong mental and physical health benefits of early positive parenting. The Center has already begun pilot programs in Washington state aimed at improving executive function and self-control among parents and children, and ultimately at improving parental engagement; these allow new interventions to be tested and refined based on the data collected. Many interventions, by contrast, have been enacted at large scale without adequate follow-up testing or mechanisms for improvement based on outcomes. For example, the authors note that the Head Start program, which aims to help young disadvantaged infants and children, has not built the infrastructure to evaluate the effectiveness of its various programs or to identify which programs most benefit which individuals. As research suggests, intervening early in development can be incredibly impactful, so we should capitalize on our scientific understanding to implement the most evidence-based programs and use outcomes data to improve them continually. (Emily Deruy, The Atlantic)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

May 10, 2016 at 9:00 am