Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘bias’

Science Policy Around the Web – July 27, 2018


By: Emily Petrus, Ph.D.


source: pixabay

Innovation

Artificial Intelligence Has a Bias Problem, and It’s Our Fault

While computer and data scientists work to create systems that can reason and perform complex analyses, ethicists, lawyers, and human rights advocates are expressing increasing concern about the impact artificial intelligence will have on everyday life. It is becoming apparent that human biases regarding race, gender, and socioeconomic position also influence the algorithms and data sets used to train machine learning software.

Most artificial intelligence (AI) systems are trained on data sets culled from the internet. This results in skewed data that over-represents images and language from the United States. For example, an image of a white woman in a white dress leads algorithms to label the picture “bride” or “wedding”, while an image of a North Indian bride is labeled “performance art”. If that seems like a harmless hiccup, consider algorithms designed to detect skin cancer from images. A recently published algorithm did a decent job detecting dark moles on light skin, but only 5% of its training data depicted dark-skinned people, and the algorithm was never tested on those images. This bias could undermine accurate diagnosis for already underserved minority populations in the United States. Finally, AI will have a huge impact on economic decisions, beyond the replacement of human workers in sectors such as manufacturing. Decisions about loan eligibility and job-candidate hiring are already being filtered through AI technology, which is guided by data that may be biased.

It is apparent that computer scientists must make concerted efforts to remove bias from training data sets and to increase transparency when they develop new AI systems. Unfortunately, these common-sense suggestions are just that: suggestions. Before President Obama left office, his administration released a roadmap in fall 2016 to guide research and development of AI systems, but the policy has no teeth to dictate fairness and inclusivity in AI development. Private and academic institutions, however, are making gains in this arena. The Human-Centered AI project at Stanford University and the Fairness, Accountability, Transparency, and Ethics (FATE) in AI research group at Microsoft are two examples of these efforts. Both groups seek to increase inclusivity in AI algorithms and reduce bias, whether human or computer generated. AI can also be trained to detect biases in both training data and trained models by conducting an AI audit. A joint effort by developers in academia and private industry will be necessary to produce AI that is demonstrably unbiased, since federal regulators are unlikely to have the power or dexterity to enforce concrete rules for this technology. As with most scientific advances that bring significant monetary gains, the pace is breakneck, but corners should not be cut. Legislation is unlikely to keep up with the technology, so incentives to keep the playing field fair should come from within the AI community itself.

(Ben Dickson, PC Mag)
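To make the idea of an AI audit mentioned above concrete: one common step is comparing a model's error rates across demographic groups in a labeled evaluation set. The sketch below is a minimal, hypothetical illustration in Python; the group names and records are invented for the example, not drawn from any study cited in the article.

```python
# Minimal sketch of one step of an "AI audit": comparing a model's
# error rates across demographic groups. All records below are
# hypothetical placeholders, not data from any study cited here.
from collections import defaultdict

def per_group_error_rates(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical evaluation records for a skin-lesion classifier
evaluation = [
    ("light_skin", "malignant", "malignant"),
    ("light_skin", "benign", "benign"),
    ("light_skin", "benign", "benign"),
    ("dark_skin", "malignant", "benign"),   # a missed diagnosis
    ("dark_skin", "benign", "benign"),
]

for group, rate in sorted(per_group_error_rates(evaluation).items()):
    print(f"{group}: error rate {rate:.0%}")
# A large gap between groups flags a bias worth investigating.
```

A real audit would also check how well each group is represented in the training data itself, since a 5% share, as in the skin cancer study described above, is already a warning sign.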

Scientific oversight

NIH delays controversial clinical trials policy for some studies

How does the brain process images of faces? How do we respond to frustrating situations? What does the mind of a sociopath look like in an MRI? These are all basic science questions in brain research whose answers may point to treatment options for future studies. But for the moment, no drugs or interventions are being tested in many basic research labs funded by the National Institutes of Health (NIH). This means their studies aren't clinical interventions, or, by definition, clinical trials, right? Maybe…

Basic researchers studying the healthy human brain can breathe a sigh of relief, as the NIH has decided to delay new rules on the classification of human studies. At issue is the reclassification of which research counts as a clinical trial. The intent of the new guidelines was to increase reproducibility and transparency in government-funded human research, for example by requiring more rigorous statistical practices. In practice, investigators would be required to register their studies on clinicaltrials.gov, take mandatory trainings, and produce significantly more paperwork to continue receiving funding for their basic research. In addition, researchers were concerned that this would confuse the public, as their research would be inaccurately represented as clinical trials.

After the announcement last year, professional societies and academics sent letters of complaint to NIH, prompting Congress to delay the implementation of the requirements until September 2019. The delay also gives leniency to basic researchers who apply to funding opportunity announcements seeking studies labeled as clinical trials: their applications will not be immediately disqualified from being scored. Although many researchers hoped the NIH would drop all requirements for basic research, the delay is welcome for now. “This delay is progress because it gives them more time to get it right, and in the interim people aren’t going to be in trouble if they get it wrong,” said Jeremy Wolfe, a cognitive psychologist at Harvard Medical School.

(Jocelyn Kaiser, Science)

Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

July 27, 2018 at 4:51 pm

Posted in Linkposts


Science Policy Around the Web – July 24, 2018


By: Janani Prabhakar, Ph.D.


source: pixabay

The Scientific Workforce

Has the tide turned towards responsible metrics in research?

Quantifying progress and success in science has long been a challenging task. What is the correct metric to use? Existing metrics can capture the general impact of journals or the productivity of an individual researcher based on publication rate, but predictive statistics are much less common in academia, even though predictive statistics and machine learning approaches are used routinely in other industries and sectors. For example, as this article points out, predictive statistics and modeling are used in baseball to identify new talent. Why not in academia? Private companies such as Academic Analytics offer versions of multiple existing metrics to gauge potential success, including citation counts, h-indices, impact factors, and grant income. The desire for such statistics comes from those making hiring decisions within academia as well as from policymakers making budgetary decisions. The need to quantify potential success is apparent, but when, where, and how to do so, and how to do so ethically, remain hotly debated.

The San Francisco Declaration on Research Assessment (Dora) called for an end to using journal impact factors in funding and hiring decisions and for a focus on other metrics instead. The UK has made some strides to ensure that any metrics used still cohere with the principles of science and reflect ‘responsible metrics’: robustness, humility, transparency, diversity, and reflexivity. A recent report evaluates the success of these metrics and their implementation over the last five years. Out of 96 UK universities and research organizations, 21 have already agreed to follow these metrics, and some universities have begun to implement their own policies beyond those outlined in Dora. This has generated a growing body of data on good metrics and practices that universities can use to shift policy, improve the academic environment, reduce abuse, and employ responsible management practices. These data are a great resource for making change in universities at the global level.

(James Wilsdon, The Guardian)
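Of the metrics named above, the h-index is the easiest to make concrete: a researcher has index h if h of their papers have each been cited at least h times. Here is a minimal Python sketch of that calculation, using hypothetical citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with >= 3 citations)
```

Part of the debate over responsible metrics is precisely that a single number like this compresses away field differences, career stage, and the quality of the underlying work.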

Psychology

Confronting Implicit Bias in the New York Police Department

After a long history of police brutality toward black men, the role of racial bias has moved front and center in the American dialogue. Implicit biases are the kinds of biases that are unintentional, unconscious, and more pervasive than overt racial bias alone. Erasing such biases requires overcoming one's own stereotypes and using facts to make rational decisions. As part of Mayor Bill de Blasio's police reform efforts, a training program on implicit bias will run through next year, conducted by Fair and Impartial Policing, a Florida company that provides such training programs for many police departments. While this program will cost the city $4.5 million, there is no data yet to assess the training's effectiveness.

The lack of objective data is troubling to policymakers and researchers, given the spread of this training across many police departments. Dr. Patricia G. Devine, a professor at the University of Wisconsin, has stated that we first need to know more about officers' unintentional biases to determine whether the training has a significant effect. Furthermore, the longevity of the training's effects needs to be determined, both in terms of changes in officer behavior and the extent to which the community has benefitted. Despite the lack of such data, feedback from trainers suggests that over the course of the training period, initial hesitance in police officers turns into a better appreciation of the role stereotypes play in action selection.

For many police officers, the training is an opportunity to reflect on their own behaviors and make meaning of them in light of their own tendencies to stereotype racially. The training isn't meant to cure officers of their biases, but rather to help them confront and manage those biases. Officers are shown case studies of situations where biases produce differences in the way police confront white versus black individuals, allowing them to appreciate the real-world consequences of implicit bias. They are then taught strategies to reduce and manage their biases, and to recognize biases in others. Part of the process is also to help officers make “unhurried decisions” so they have time to think, strategize, and make appropriate choices. Without metrics, the program's long-term viability may be questioned, but from the perspective of many participants it is a big step in the right direction, because it acknowledges underlying prejudices that might not otherwise have been recognized.

(Al Baker, The New York Times)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 25, 2018 at 5:23 pm

Science Policy Around the Web – February 2, 2018


By: Michael Tennekoon, PhD


source: pixabay

Bias in research

Gender bias goes away when grant reviewers focus on the science

The lack of senior female faculty in science has come under increasing scrutiny. Many reasons have been postulated for this, including a lack of appropriate mentoring, inadequate support for balancing family needs, and a general bias in the field. Highlighting the possible impact of bias, a new study from Canada shows that women are rated less favorably than men when grant reviewers assess the researcher rather than the research proposed in the application.

To address the issue of gender bias, the Canadian Institutes of Health Research (CIHR) phased out traditional grant programs that focused on both the science and the investigator. Instead, it ran two parallel programs: one focused primarily on the applicant's credentials, the other on the proposed science. In addition, reviewers were trained to recognize unconscious biases that may affect the impartiality of their review decisions.

When grant reviewers focused on the quality of the applicant, the success rate for male applicants was 4 percentage points higher than for female applicants. When reviewers instead focused on the quality of the proposed science, the gap shrank significantly, to 0.9 percentage points, a level similar to that seen under traditional grant funding programs.
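For clarity, the gap reported here is a simple difference between the two groups' success rates. A toy calculation (the application counts below are hypothetical, not the CIHR figures):

```python
# Hypothetical counts illustrating how a success-rate gap is computed.
def success_rate(funded, submitted):
    return funded / submitted

male_rate = success_rate(140, 1000)    # 14.0% of male applications funded
female_rate = success_rate(100, 1000)  # 10.0% of female applications funded

gap = (male_rate - female_rate) * 100
print(f"Gap: {gap:.1f} percentage points")  # Gap: 4.0 percentage points
```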

The impact of training reviewers on unconscious bias was of particular interest. Previous work had suggested that this type of training could exacerbate the problem; in this case, however, it appeared to help, reducing the gap in successful applications between genders. The authors of the current study plan to further explore how the training might neutralize some of this unconscious bias.

While a strength of this study is that it accounted for applicants' research areas and ages, it was not randomized, which could have affected the results. For example, sampling bias in who chose to apply for the grants could have produced differences between male and female applicants in various aspects of their applications, such as publication records.

Is this gender bias prevalent in the United States as well?

Somewhat encouragingly, research has shown that males and females are funded by the NIH at equivalent rates early in their careers. Of graver concern, however, the same research showed that the number of women who apply for funding drops dramatically as their careers progress. Since funding success rates do not appear to drive this, other factors, such as a lack of senior role models and mentors or inadequate support for women who wish to have children while continuing in research, may explain the drop in female faculty. It is also important to note that biases are not limited to gender but extend to race as well. A 2011 study showed that white researchers are funded at nearly twice the rate of African American researchers despite similar publication and training records. While approaches such as CIHR's may help increase representation in senior faculty positions, solutions that tackle systemic biases may be needed to address the full scope of the problem.

(Giorgia Guglielmi, Nature News)

Influence of Social Media

Google’s new ad reckons with the dark side of Silicon Valley’s innovations

Studies have shown a rise in teen suicides and self-harm in recent years, and there is growing concern that increasing time spent on social media plays a role. Conscious of this, Google recently debuted a new advertisement highlighting the mental health implications of social media and other modern technology. The advertisement starts by showing people sharing happy moments and pictures. It then pivots to suggest that not everything is as it seems in those perfect pictures: all of the people involved had, at some point, sought support through the national suicide prevention hotline.

Recent research has raised questions about the effects of social media, in particular the ‘pressure of perfection’ and its impact on users. Specifically, the research shows that users who passively scroll through news feeds may be the most susceptible to feelings of unhappiness. Beyond this effect, technology companies are under fire for helping to spread offensive material. A recent, pertinent example was Logan Paul documenting on YouTube his discovery of a dead body in a Japanese forest known for suicides. Furthermore, after Netflix aired ‘13 Reasons Why’, a series centered on a teenager's suicide, Google searches for suicide methods spiked. Alarmingly, searches about suicide have been documented to be linked to individuals taking their own lives.

Large technology companies such as Facebook, Apple, and Google are making efforts to rectify the situation. Facebook has begun using artificial intelligence to recognize suicidal thoughts and connect affected individuals with first responders. However, these companies' capability to effectively prevent suicides is limited by the essence of what their platforms are designed to be. For example, while a Google search asking how to commit suicide will display suicide prevention resources and hotline numbers, it will also bring up pages detailing the “right way” to commit suicide, along with would-be instructional videos on YouTube. Clearly these companies still have much work to do when it comes to reducing teen suicide rates.

(Drew Harwell, The Washington Post)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

February 2, 2018 at 1:11 pm

Science Policy Around the Web – June 7, 2016


By: Thaddeus Davenport, Ph.D.

“Amazon Manaus forest” by Phil P Harris – own work, licensed under CC BY-SA 2.5 via Wikimedia Commons.

Conservation Policy

A collaboration between science and religion for ecological conservation

Science has the potential to solve many of the world’s problems, but it may be overly optimistic to think that science alone can cure the world of all that ails it. Climate change and loss of biodiversity threaten humans in ways we have yet to fully comprehend, and yet these problems emerged not from some mysterious force but from simple human choices – the collective action (and inaction) of humans over many years. This suggests that the solution to these grand challenges will not come from scientific breakthroughs alone. Instead, the solution presents itself with a disappointing and somewhat undesirable simplicity: a problem created by humans might also be solved by human cooperation, responsibility, and ownership of our world and our problems. Indeed, to tackle the world’s most complex challenges, science and society will need to work together.

Christine A. Scheller reported in March that the American Association for the Advancement of Science (AAAS) annual meeting featured a Dialogue on Science, Ethics, and Religion (DoSER) discussion, which addressed potential opportunities for collaboration between conservation scientists and religious communities in stemming the loss of biodiversity. The speakers included conservation biologist Karen Lips, wildlife ecologist Peyton West, and theologian William Brown. Lips, director of the Graduate Program in Sustainable Development and Conservation Biology at the University of Maryland, College Park, discussed the decline of amphibian species and noted that while scientists may understand the causes of the problem and potential solutions, the efficacy of any conservation effort will depend on the participation and engagement of the communities where species are going extinct. Similarly, West, the Executive Director of the Frankfurt Zoological Society-U.S., described the important and unique role of religious leaders in shaping the beliefs and behavior of their followers and highlighted the efforts of Catholic, Buddhist, and Islamic leaders to discourage ivory trafficking. Finally, Brown, a Columbia Theological Seminary professor of Old Testament, observed that nature is represented in the Bible as the dominion of man – a perspective that has historically been “unhelpful” in encouraging conservation. He ended more positively, however, noting that “[m]uch of scripture affirms God’s love for all creation and acknowledges humanity’s vital connection with the nonhuman animal world.”

Science and religion are arguably the two most powerful thought systems in our global society. There is enormous potential to transform our world for the better if we can align the goals of each system toward creating a more just, balanced, and healthy world, and identify opportunities for collaboration to achieve these goals. The DoSER program is an exciting forum in which these collaborations may take root. (Christine A. Scheller, AAAS)

Human Genetics

Why try to build a human genome from scratch?

Last week, a group of scientists released a report in the journal Science outlining their goal of building a complete human genome from scratch. The goal was initially discussed in a closed-door meeting, which drew criticism from those concerned about the ethics of such a proposition. The recent report is the product of that meeting and is intended to achieve transparency and initiate an open discussion of the value, as well as the ethical and practical considerations, of such a project.

The proposed initiative is named “HGP-write” for human genome project – write, to differentiate it from the first, highly fruitful stage of reading the sequence of the human genome (HGP-read), which was completed in 2004. Perhaps in response to their initial criticism, the authors begin the report by acknowledging the ethical questions that will arise over the course of the project and emphasize that they hope to ensure responsible innovation by allocating a portion of research funding to facilitate “inclusive decision-making”. These will likely be valuable discussions with the potential to yield regulatory decisions that should be relevant for emerging gene-editing technologies, such as CRISPR, as well.

The authors go on to say that just as HGP-read produced a significant decrease in the cost of DNA sequencing, one of the goals of HGP-write is to develop technology that will make synthesizing large pieces of DNA faster and cheaper – they cite an optimistic goal of decreasing “the costs of engineering and testing large (0.1 to 100 billion base pairs) genomes in cell lines by over 1000-fold within ten years.”

But how would this technology be applied? The authors provide a number of examples, notably focused at the cell and organ level, including facilitating the growth of transplantable human organs in other animals and engineering cell lines or organoids for cost-efficient vaccine and pharmaceutical development. Additionally, the authors note that this ambitious project would begin by synthesizing small pilot genomes and DNA fragments, and that even these small-scale projects would be of substantial value; for example, synthesizing an entire gene locus, including its associated noncoding DNA, may provide insight into the regulatory role of noncoding DNA in gene expression and disease. The project is expected to begin this year with an initial investment of $100 million from a variety of public and private sources, and the authors estimate that the final cost will be less than the $3 billion spent on HGP-read.

Without a doubt, there is much good that could come from HGP-write – the ethical debate, the technological advances, a better understanding of the so-called “junk” DNA that makes up the majority of the human genome, and the applications of synthesized genomes. It is an exciting proposition that should be approached carefully and inclusively.

Peer Review Process

Confronting Bias in Peer Review

Humans are unavoidably flawed, and one of our greatest flaws is that each of us carries subtle biases – preconceptions that shape our view and simplify our interaction with an unimaginably complex world. The essential role of peer review in the scientific endeavor is founded on the assumption that our peers can make objective assessments of the value and quality of our work, without bias. In a system of thinking and observation that depends entirely on objective, measurable truths, no weight should be placed on who made the observation. Unfortunately, science and decisions about publishing and funding scientific research are exclusively human activities, and thus subject to the irrational biases that are so characteristically human.

No one – not even a scientist – is free of bias, and a recent AAAS-sponsored forum sought to highlight the presence of bias in scientific peer review. Ginger Pinholster wrote about this forum on intrinsic bias in a Science magazine article from May 27th. Pinholster reports that multiple speakers observed that bias in scientific peer review is not only a problem of fairness. Geraldine Richmond, the AAAS Board Chair, noted that “unconscious assumptions about gender, ethnicity, disabilities, nationality, and institutions clearly limit the science and technology talent pool and undermine scientific innovation.”

Editors from the New England Journal of Medicine and the American Chemical Society pointed out a US-centric bias in peer review. Gender bias was discussed as well by Suzanne C. Iacono, head of the Office of Integrative Activities at the National Science Foundation (NSF). Though success rates in NSF grant funding were similar for men and women in 2014, women submitted only one quarter of the total grant applications. Iacono also noted that the success rate for NSF applications from African-American scientists was lower than the overall success rate (18% vs. 24%); more worrisome still, only 2% of all applications came from African-American scientists. Similarly, Richard Nakamura, director of the Center for Scientific Review at the National Institutes of Health (NIH), cited an NIH funding success rate for African-American scientists approximately half that of white applicants.

While a number of potential interventions to minimize bias were discussed, including double-blind peer-review, it is clear from the relatively small number of funding applications from women and African-Americans that larger structural changes must occur to support and retain women and minority scientists early in their scientific development. The interest of AAAS in studying and addressing problems of bias in scientific peer-review is commendable. Understanding the problem is an important first step and finding a solution will require practice in self-awareness, as well as cooperation between high schools, universities, and finally funding and publishing agencies. (Ginger Pinholster, Science)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

June 7, 2016 at 10:00 am