Science Policy For All

Because science policy affects everyone.


Bias in Artificial Intelligence


By: Thomas Dannenhoffer-Lafage, PhD

Image by Geralt from Pixabay

Artificial intelligence (AI) is a nebulous term whose definition has changed over time, but a useful modern definition is “the theory and development of computer systems that are able to perform tasks normally requiring human intelligence.” Modern AI has been applied to a wide variety of problems. Applications of AI include streamlined drug discovery, image processing, targeted advertisement, medical diagnosis assistance, hedge fund investment, and robotics, and AI influences people’s lives more than ever before. These powerful AI systems have allowed certain tasks to be performed more quickly than a human could perform them, but AI also suffers from a very human deficiency: bias. While bias has a more general meaning in the field of statistics (a field closely related to AI), this article will specifically consider social bias, which exists when a certain group of people is favored or disfavored.

To understand how AI becomes socially biased, it is critical to understand how certain AI systems are made. The recent explosion of AI breakthroughs is due in part to the increased capabilities of machine-learning algorithms, specifically deep-learning algorithms. Machine-learning algorithms differ from human-designed algorithms in a fundamental way. In a human-designed algorithm, a person must provide specific instructions to the computer so that a set of inputs can be turned into outputs. In a machine-learning algorithm, a person provides a set of data, and the algorithm learns how to perform a specific task from the patterns within that data. This is the “learning” in machine learning. There are many different types of tasks that a machine-learning algorithm can perform, but all ultimately rely on some data set to learn from. Deep-learning algorithms have become so powerful in part because of the large amounts of data available to train them, the availability of more powerful and inexpensive GPUs that greatly improve the speed of AI algorithms, and the increased availability of deep-learning source code online.
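
To make the distinction concrete, here is a minimal sketch in Python (using the scikit-learn library and a toy dataset invented purely for illustration) contrasting a rule written by a person with a rule learned from example data:

```python
# Illustrative sketch only: a hand-written rule vs. a rule "learned" from
# example data with scikit-learn. The numbers are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy data: [hours_studied, classes_attended] -> passed exam (1) or not (0)
X = [[1, 2], [2, 1], [8, 9], [9, 7], [3, 2], [7, 8]]
y = [0, 0, 1, 1, 0, 1]

# Human-designed algorithm: a person writes the decision rule explicitly.
def hand_coded_rule(hours, classes):
    return 1 if hours >= 5 else 0

# Machine learning: the algorithm infers a rule from patterns in the data.
model = DecisionTreeClassifier().fit(X, y)

print(hand_coded_rule(6, 5))       # prediction from the person's rule
print(model.predict([[6, 5]])[0])  # prediction from the learned rule
```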

An AI can be trained to be biased maliciously, but more concerning is that bias can be incorporated into AI unintentionally. Specifically, inadvertent bias can creep into machine-learning-based AI in two ways: when bias is inherent to the data and when bias is embedded in the task the AI is asked to perform. Inherent bias can occur when the data are not representative of reality, such as when certain populations are inadequately represented. This occurred, for example, in facial recognition software that was trained with far more photos of light-skinned faces than dark-skinned faces; the resulting program was less effective at identifying dark-skinned faces, leading to errors and misidentifications. Bias can also be inadvertently introduced into data during featurization, a process in which raw data are manually modified and curated before being presented to an AI to improve its learning rate and task performance. Human agents may unknowingly introduce bias into a dataset while performing featurization. Finally, bias often exists in the task that an algorithm is asked to perform. Tasks given to AI algorithms are usually chosen for business reasons, and questions of fairness are therefore typically not considered.
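
As a rough illustration of the first failure mode, the sketch below (plain Python, with made-up group labels and counts) shows the kind of simple representation check that can flag a skewed training set before any model is trained:

```python
# Illustrative sketch with invented labels: compare how often each group
# appears in a training set before fitting a model on it.
from collections import Counter

# Hypothetical annotations for a face-image dataset (made up for this example)
training_groups = ["light_skin"] * 900 + ["dark_skin"] * 100

counts = Counter(training_groups)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} images ({n / total:.0%} of the training data)")

# A heavily skewed split (here 90% vs. 10%) is a warning that the model may
# perform worse on the under-represented group.
```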

When a machine-learning-based AI is trained on a biased dataset, the consequences can be serious. For instance, the recently introduced Apple credit card used AI to determine the creditworthiness of applicants. Questions were raised about the validity of this system when Steve Wozniak pointed out that his wife was offered a credit limit ten times lower than his, despite their credit profiles being nearly identical. There have also been issues of bias in AI systems used in school admissions and hiring platforms. For example, an AI algorithm tasked with reviewing application materials for an open job position turned out to be unknowingly biased against women applicants because of differences in language between male and female applicants. Bias was also an issue in an AI algorithm used to estimate the likelihood of recidivism among parolees, which rated African Americans as higher risk than their actual rates of reoffending warranted. Since AI has been entrusted with greater decision-making power than in the past, it has the power to propagate bias at a much greater rate than ever before.

Even though bias has been a known issue in AI for many years, it is still difficult to fix. One major reason is that machine-learning algorithms are designed to exploit patterns, or correlations, in data that may be impossible for a human to see. This can create problems because an AI may treat artifacts of the data as evidence that a certain decision should be made. For instance, different medical imaging equipment may produce slightly different image quality or handle boundary conditions differently, and an AI algorithm may latch onto those differences when making a diagnosis. Another issue is that, even if an AI algorithm is not directly given sensitive attributes (e.g. race or age), it may be able to infer them. For instance, if a training dataset includes height, the AI may be able to infer that an applicant is likely male, since men are, on average, taller than women. This problem of invisible correlations is compounded by the fact that AI systems are generally unable to explain their decisions, and backtracking how a decision was made can be impossible. Finally, AI systems are designed to perform tasks as successfully as possible and are not designed to take fairness into account at the design stage.
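
The proxy problem can be demonstrated in a few lines. The sketch below uses synthetic height data (invented distributions) and scikit-learn to show that a model given only a seemingly neutral feature can still largely recover a protected attribute:

```python
# Illustrative sketch with synthetic data: even if sex is never provided,
# a "neutral" feature such as height can act as a proxy for it.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
# Simulated applicant heights (cm), drawn from overlapping distributions.
heights_male = [random.gauss(178, 7) for _ in range(500)]
heights_female = [random.gauss(165, 7) for _ in range(500)]

X = [[h] for h in heights_male + heights_female]
y = [1] * 500 + [0] * 500  # hidden attribute: 1 = male, 0 = female

# A simple model recovers the hidden attribute from height alone, so any
# downstream model that uses height can encode sex indirectly.
clf = LogisticRegression().fit(X, y)
print(f"accuracy of inferring sex from height alone: {clf.score(X, y):.0%}")
```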

Thankfully, different solutions to bias in AI algorithms have been proposed. One possibility is to include fairness metrics as part of the design process of AI. An example is counterfactual fairness: an algorithm satisfies counterfactual fairness when a prediction based on an individual’s data is the same as the prediction for counterfactual data in which all factors are identical except the individual’s group membership. However, it has been shown that certain fairness metrics cannot all be satisfied simultaneously because each metric constrains the decision space too greatly. Another solution is to test AI algorithms before deployment and ensure fairness by checking that the rates of false positives and false negatives are equal across protected groups. New technologies may also help fight AI bias in the future, such as human-in-the-loop decision making, which enables a human agent to review the decisions of an AI system and catch false positives, and explainable AI, which is able to explain its decisions to a human. Other solutions include having groups developing AI engage in fact-based conversations about bias more generally, including trainings that identify types of biases, their causes, and their solutions. A push for more diversity in the field of AI is also a necessary step, because team members from majority groups can disregard differing experiences. Lastly, it is suggested that AI algorithms should be regularly audited, both externally and internally, to ensure fairness.
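
The pre-deployment test described above, comparing error rates across protected groups, is straightforward to express in code. Below is a minimal sketch in plain Python with hypothetical outcomes and predictions:

```python
# Illustrative sketch with made-up labels and predictions: compare false
# positive and false negative rates across two protected groups.
def error_rates(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical true outcomes and model decisions for two groups.
groups = {
    "group A": ([0, 0, 1, 1, 0, 1, 0, 1], [0, 1, 1, 1, 0, 1, 0, 0]),
    "group B": ([0, 0, 1, 1, 0, 1, 0, 1], [1, 1, 1, 0, 0, 1, 1, 0]),
}

for name, (truth, pred) in groups.items():
    fpr, fnr = error_rates(truth, pred)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

# Large gaps between the groups' rates mean the model fails this fairness test.
```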

The discussion of AI bias within the field has come a long way in the last few years. Companies involved in AI development now employ people whose main role is to fight AI bias. However, most of the regulation of biased AI still occurs internally. This has prompted actors outside of AI development, including the government and lawyers, to look at AI bias issues as well. Recently, lawmakers introduced the Artificial Intelligence Initiative Act, which aims to establish means for the responsible delivery of AI. The bill calls for NIST to create standards for evaluating AI, including the quality of training sets. The NSF would be called on to create training programs for responsible AI use that address algorithmic accountability and data bias. The bill does not propose guidelines or timelines for governmental regulation; rather, it creates organizations to advise lawmakers and perform governmental research. Regulation would be imperative if AI were to move away from self-regulation. The decision-making power of AI has also gotten the attention of lawyers in the field of labor and employment, who fear that today’s AI systems “have the ability to make legally significant decisions on their own.” Thus, there is more opportunity than ever for technologists to influence AI at the policy level by being involved in the governmental organizations creating policy, educating lawmakers involved in AI law and regulation, alerting the public to bias and other issues, and working on industry-specific solutions directly.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

January 10, 2020 at 1:58 pm

Science Policy Around the Web – July 27, 2018


By: Emily Petrus, Ph.D.


source: pixabay

Innovation

Artificial Intelligence Has a Bias Problem, and It’s Our Fault

While computer and data scientists are working to create systems that can reason and perform complex analysis, groups of ethicists, lawyers, and human rights advocates express increasing concern about the impact artificial intelligence will have on life. It is becoming apparent that human biases regarding race, gender, and socioeconomic position also influence the algorithms and data sets used to train machine-learning software.

Most artificial intelligence (AI) systems are trained on data sets culled from the internet. This results in skewed data that over-represent images and language from the United States. For example, a white woman in a white dress leads algorithms to label a picture as “bride” or “wedding”, while an image of a North Indian bride is labeled as “performance art”. If that seems like a harmless hiccup, think about algorithms designed to detect skin cancer from images. A recently published study did a decent job detecting dark moles on light skin, but only 5% of the data set depicted dark-skinned people, and the algorithm wasn’t even tested on that subset. This bias could skew diagnoses for already underserved minority populations in the United States. Finally, AI will have a huge impact on financial decisions beyond the replacement of human jobs, particularly in manufacturing: decisions on loan eligibility and job candidate hiring are being filtered through AI technology, which is guided by data that may be biased.

It is apparent that computer scientists must make concerted efforts to un-bias training data sets and increase transparency when they develop new AI systems. Unfortunately, these common-sense suggestions are just that: suggestions. Before Obama left office in fall 2016, the administration created a roadmap to guide research and development of AI systems, but there are no teeth in policy dictating fairness and inclusivity in AI development. Private and academic institutions, however, are making gains in this arena. The Human-Centered AI project at Stanford University and the Fairness, Accountability, Transparency, and Ethics (FATE) in AI research group at Microsoft are two examples of these efforts. Both groups seek to increase inclusivity in AI algorithms and reduce bias, whether human or computer generated. AI can also be trained to detect biases in both the training data and the models themselves by conducting an AI audit. A joint effort by developers in academia and private industry will be necessary to produce AI and prove it is unbiased, and it is unlikely that federal regulations would have the power or dexterity to impose concrete rules on this technology. Like most other scientific advances that bring significant monetary gains, the pace is breakneck, but corners should not be cut. Legislation is unlikely to keep up with the technology, so incentives to keep the playing field fair should come from within the AI community itself.
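
One simple form such an audit can take is comparing a model's decision rates across groups, for example with a disparate impact ratio. The sketch below (plain Python, with invented approval counts) is purely illustrative:

```python
# Illustrative audit sketch with invented numbers: compare positive-decision
# rates across groups and compute a disparate impact ratio.
decisions = {
    # group: (number approved, number of applicants), made up for illustration
    "group_A": (80, 100),
    "group_B": (48, 100),
}

rates = {g: approved / total for g, (approved, total) in decisions.items()}
ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"{g}: approval rate {r:.0%}")
print(f"disparate impact ratio: {ratio:.2f}")

# A ratio well below the common "four-fifths" (0.8) guideline flags the model
# for closer human review before or after deployment.
```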

(Ben Dickson, PC Mag)

Scientific oversight

NIH delays controversial clinical trials policy for some studies

How does the brain process images of faces? How do we respond to frustrating situations? What does the mind of a sociopath look like in an MRI? These are all basic science questions in brain research whose answers may inform treatment options in future studies. But for the moment, no drugs or interventions are being tested in many basic research labs funded by the National Institutes of Health (NIH). This means they’re not clinical interventions, or by definition, clinical trials, right? Maybe…

Basic researchers studying the healthy human brain can breathe a sigh of relief, as the NIH has decided to delay new rules on the classification of human trials. At issue is the re-classification of research that can be considered a clinical trial. The intent of the new guidelines was to increase reproducibility and transparency in government-funded human research, for example by requiring more rigorous statistical practices. In practice, investigators would be required to upload their studies to clinicaltrials.gov, take mandatory trainings, and produce significantly more paperwork to continue receiving funding for their basic research. In addition, researchers were concerned that this would create more confusion among the public, as their research would be inaccurately represented as clinical trials.

After the announcement last year, professional societies and academics sent letters of complaint to NIH, prompting Congress to delay the implementation of the requirements to September 2019. This delay also gives leniency to basic researchers who apply to funding opportunity announcements seeking studies labeled as clinical trials, meaning they would not be immediately disqualified from being scored. Although many researchers hoped the NIH would drop all requirements for basic research, the delay is welcome for now. “This delay is progress because it gives them more time to get it right, and in the interim people aren’t going to be in trouble if they get it wrong,” said Jeremy Wolfe, a cognitive psychologist at Harvard Medical School.

(Jocelyn Kaiser, Science)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 27, 2018 at 4:51 pm


Science Policy Around the Web – July 24, 2018


By: Janani Prabhakar, Ph.D.


source: pixabay

The Scientific Workforce

Has the tide turned towards responsible metrics in research?

Quantifying progress and success in science has long been a challenging task. What is the correct metric to use? Mathematics can provide insight into the general impact of journals or calculate an individual researcher’s productivity based on publication rate, but predictive statistics are much less common. Predictive statistics and machine-learning approaches are used quite often in other industries and sectors; as this article points out, predictive statistics and modeling are used in baseball to identify new talent. Why not in academia? There are private companies, such as Academic Analytics, that provide such statistics. They offer a version of multiple existing metrics to measure potential success, including citations, H-indices, impact factors, and grant income. The desire for such statistics comes from everyone from those making hiring decisions within academia to policymakers making budgetary decisions. The need for quantifying potential success is apparent, but when, where, and how remain hotly debated, as does how to do so ethically. The San Francisco Declaration on Research Assessment (DORA) called for an end to using journal impact factors in funding and hiring decisions and a focus on other metrics instead. The UK has made some strides to ensure that any metrics used still adhere to the principles of science and reflect ‘responsible metrics’: robustness, humility, transparency, diversity, and reflexivity. A recent report evaluates the success of these metrics and their implementation over the last five years. Of 96 UK universities and research organizations, 21 have already agreed to follow these metrics, and some universities have begun to implement their own policies beyond those outlined in DORA. This has produced a growing body of data on good metrics and practices that universities can use to shift policy, improve the academic environment, reduce abuse, and employ responsible management practices. These data are a great resource for making change in universities at the global level.

(James Wilsdon, The Guardian)

Psychology

Confronting Implicit Bias in the New York Police Department

After a long history of police brutality towards black men, the role of racial bias has come front and center in the American dialogue. Implicit bias reflects the kinds of biases that are unintentional, unconscious, and more pervasive than racial bias alone. Erasing such biases requires overcoming one’s own stereotypes and using facts to make rational decisions. As part of Mayor Bill de Blasio’s police reform efforts, a training program on implicit bias will run through next year, conducted by Fair and Impartial Policing, a Florida company that provides such training programs for many police departments. While this program will cost the city $4.5 million, there is no data yet to assess the training’s effectiveness. The lack of objective data is troubling to policymakers and researchers, given the spread of this training across many police departments. Dr. Patricia G. Devine, a professor at the University of Wisconsin, has stated that we need to first know more about officers’ unintentional biases to determine whether the training has a significant effect. Furthermore, the longevity of the training effects needs to be determined, both in terms of changes in officer behavior and the extent to which the community has benefitted. Despite the lack of such data, feedback from trainers suggests that over the course of the training period, initial hesitance among police officers turns into a better appreciation for the role of stereotypes in action selection. For many police officers, the training is an opportunity to reflect upon their own behaviors and make meaning out of them from the perspective of their own tendencies to racially stereotype. The training isn’t meant to cure officers of their biases, but rather to help them confront and manage them. Police officers are shown case studies of situations where biases result in differences in the way officers confront white versus black individuals, allowing them to appreciate the real-world consequences of implicit biases. Police officers are then taught strategies to reduce and manage their biases, and to recognize biases in others. Part of the process is also to help police officers make “unhurried decisions” so they have time to think, strategize, and make appropriate choices. Without metrics, the long-term viability may be questioned, but from the perspective of many participants, it is a big step in the right direction as it acknowledges underlying prejudices that may not have otherwise been recognized.

(Al Baker, The New York Times)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

July 25, 2018 at 5:23 pm

Science Policy Around the Web – February 2, 2018


By: Michael Tennekoon, PhD


source: pixabay

Bias in research

Gender bias goes away when grant reviewers focus on the science

The lack of senior female faculty in science has come under increased scrutiny. Many reasons have been postulated for this, including a lack of appropriate mentoring, a lack of adequate support when balancing family needs, and a general bias in the field. Highlighting the possible impact of bias, a new study from Canada shows that women are rated less favorably than men when reviewers assess the researcher rather than the research proposed in a grant application.

To address the issue of gender bias, the Canadian Institutes of Health Research (CIHR) phased out traditional grant programs that focused on both the science and the investigator. Instead, they ran two parallel programs, in which one focused primarily on the applicant’s credentials and the other focused on the science proposed. In addition, reviewers were trained to recognize unconscious biases that may impact the impartiality of their review decisions.

When grant reviewers focused on the quality of the applicant, the success rate for male applicants was 4% higher than for female applicants. When grant reviewers instead focused on the quality of the proposed science, this gap shrank significantly, to 0.9%, a level similar to that of traditional grant funding programs.

Furthermore, the impact of training reviewers on unconscious bias was of particular interest. Previous work suggested this type of training could exacerbate the situation; in this case, however, training appeared to help by reducing the gap in successful applications between genders. The authors of the current study are planning to further explore how the training may be neutralizing some of this unconscious bias.

While a strength of this study was that it accounted for applicants’ research areas and ages, the study was not randomized, which could have influenced the results. For example, there could have been sample bias based on who chose to apply for the grants, which could have produced differences between male and female applicants in various aspects of their applications (such as publication records).

Is this gender bias prevalent in the United States as well?

Somewhat encouragingly, research has shown that males and females are funded at equivalent rates early in their careers by the NIH. Of graver concern, however, is that the research also showed that the number of women who apply for funding drops dramatically as their careers progress. As funding success rates do not appear to contribute to this, other factors, such as a lack of senior role models and mentors, or inadequate support for women that wish to have children and continue working in research, may contribute to the drop in female faculty in research. It is also important to note that biases are not limited to gender, but also exist with race. A 2011 study showed that white researchers are funded at nearly twice the rate of African American researchers despite similar publication and training records. While approaches such as those used by CIHR may help increase representation in senior faculty positions, solutions that tackle systemic biases may be needed to address the full scope of the problem.

(Giorgia Guglielmi, Nature News)

Influence of Social Media

Google’s new ad reckons with the dark side of Silicon Valley’s innovations

Studies have shown a rise in the number of teen suicides and incidents of self-harm in recent years, and there is growing concern that increasing time spent on social media is playing a role. Conscious of this, Google recently debuted a new advertisement highlighting the mental health implications of social media and other modern technology. The advertisement starts off by showing people sharing happy moments and pictures, but then pivots to suggest that not everything is as it seems in the seemingly perfect photos: all of the people involved had, at one point, sought support through the national suicide prevention number.

Recent research has raised questions about the effects of social media, in particular the ‘pressure of perfection’ and the impact it can have on social media users. Specifically, the research shows that users who passively scroll through news feeds may be most susceptible to feelings of unhappiness. In addition, technology companies are under fire for helping to spread offensive material. A recent pertinent example was when Logan Paul documented on YouTube how he discovered a dead body in a Japanese forest known for suicides. Furthermore, after Netflix aired ‘13 Reasons Why’, a series centered on a teen’s suicide, Google searches for suicide methods spiked. Alarmingly, it has been documented that searches about suicide are linked to individuals committing suicide.

Large technology companies such as Facebook, Apple, and Google are making efforts to rectify the situation. Facebook has begun using artificial intelligence to recognize suicidal thoughts and connect affected individuals with first responders. However, the capability of these companies to effectively prevent suicides is limited by the essence of what they are designed to be. For example, while a Google search asking how to commit suicide will display suicide prevention resources and hotline numbers to the user, the search will also bring up pages detailing the “right way” to commit suicide, along with would-be instructional videos on YouTube. Clearly these companies still have much work to do when it comes to reducing teen suicide rates.

(Drew Harwell, The Washington Post)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

February 2, 2018 at 1:11 pm

Science Policy Around the Web – June 7, 2016


By: Thaddeus Davenport, Ph.D.

Amazon Manaus forest” by Phil P Harris. – Own work. Licensed under CC BY-SA 2.5 via Wikimedia Commons.

Conservation Policy

A collaboration between science and religion for ecological conservation

Science has the potential to solve many of the world’s problems, but it may be overly optimistic to think that science alone can cure the world of all that ails it. Climate change and loss of biodiversity threaten humans in a way that we have yet to fully comprehend, and yet these problems emerged not as a result of some mysterious force, but rather because of simple human choices – the collective action (and inaction) of humans over the course of many years. This suggests that the solution to these grand challenges does not only require scientific breakthroughs. Instead, the solution presents itself to us with a disappointing and somewhat undesirable simplicity: a problem created by humans might also be solved by human cooperation, responsibility, and ownership of our world and our problems. Indeed, to tackle the world’s most complex challenges, science and society will need to work together.

Christine A. Scheller reported in March that the American Association for the Advancement of Science (AAAS) annual meeting featured a Dialogue on Science, Ethics, and Religion (DoSER) discussion, which addressed the potential opportunities for collaboration between conservation scientists and religious communities in stemming the loss of biodiversity. The speakers included conservation biologist Karen Lips, wildlife ecologist Peyton West, and theologian William Brown. Lips, the director of the Graduate Program in Sustainable Development and Conservation Biology at the University of Maryland, College Park, discussed the decline of amphibious species and noted that while scientists may understand the causes of the problem and potential solutions, the efficacy of any conservation effort will require participation and engagement of those communities where species are going extinct. Similarly, West, the Executive Director of the Frankfurt Zoological Society-U.S., described the important and unique role of religious leaders in shaping the beliefs and behavior of their followers and highlighted the efforts of Catholic, Buddhist, and Islamic leaders to discourage ivory trafficking. Finally, Brown, a Columbia Theological Seminary Professor of the Old Testament, observed that nature is represented in the Bible as the dominion of man – a perspective that has been historically “unhelpful” in encouraging conservation. He ended more positively, however, noting that “[m]uch of scripture affirms God’s love for all creation and acknowledges humanity’s vital connection with the nonhuman animal world.”

Science and religion are arguably the two most powerful thought systems in our global society. There is enormous potential to transform our world for the better if we can align the goals of each system toward creating a more just, balanced, healthy world and to identify opportunities for collaboration to achieve these goals. The DoSER program is an exciting forum in which these collaborations may take root. (Christine A. Scheller, AAAS)

Human Genetics

Why try to build a human genome from scratch?

Last week, a group of scientists released a report in the journal Science outlining their goals of building a complete human genome from scratch. This goal was initially discussed in a closed-door meeting, which drew criticism from those concerned about the ethics of such a proposition. The recent report is the product of that meeting and is intended to achieve transparency and to initiate an open discussion on the value, as well as the ethical and practical considerations of such a goal.

The proposed initiative is named “HGP-write” for human genome project – write, to differentiate it from the first, highly fruitful stage of reading the sequence of the human genome (HGP-read), which was completed in 2004. Perhaps in response to their initial criticism, the authors begin the report by acknowledging the ethical questions that will arise over the course of the project and emphasize that they hope to ensure responsible innovation by allocating a portion of research funding to facilitate “inclusive decision-making”. These will likely be valuable discussions with the potential to yield regulatory decisions that should be relevant for emerging gene-editing technologies, such as CRISPR, as well.

The authors go on to say that just as HGP-read produced a significant decrease in the cost of DNA sequencing, one of the goals of HGP-write is to develop technology that will make synthesizing large pieces of DNA faster and cheaper – they cite an optimistic goal of decreasing “the costs of engineering and testing large (0.1 to 100 billion base pairs) genomes in cell lines by over 1000-fold within ten years.”

But how would this technology be applied? The authors provide a number of examples, notably focused on the cell and organ level, including: to facilitate the growth of transplantable human organs in other animals and to engineer cell lines or organoids for cost-efficient vaccine and pharmaceutical development, among others. Additionally, the authors note that this ambitious project would begin by synthesizing small pilot genomes and DNA fragments, and that even these small-scale projects would be of substantial value; for example, synthesizing an entire gene locus, including associated noncoding DNA, may provide insight into the regulatory role of noncoding DNA in gene expression and disease. The project is expected to begin this year with an initial investment of $100 million from a variety of public and private sources, and the authors estimate that in the end the project will cost less than the $3 billion spent during HGP-read.

Without a doubt, there is much good that could come from HGP-write – the ethical debate, the technological advances, a better understanding of the so-called “junk” DNA that makes up the majority of the human genome, and the applications of synthesized genomes. It is an exciting proposition that should be approached carefully and inclusively.

Peer Review Process

Confronting Bias in Peer Review

Humans are unavoidably flawed, and one of our greatest flaws is that each of us carries subtle biases – preconceptions about the world that shape our view and simplify our interaction with an unimaginably complex world. The essential role of peer-review in the scientific endeavor is founded on the assumption that our peers are able to think and make objective assessments of the value and quality of our work, without bias. In a system of thinking and observation that depends entirely on objective, measurable truths, there should be no value placed on who made the observation. Unfortunately, science and decisions about publishing and funding scientific research are exclusively human activities, and thus they are subject to the irrational biases that are so characteristically human.

No one – not even a scientist – is free of bias, and a recent AAAS-sponsored forum sought to highlight the presence of bias in scientific peer-review. Ginger Pinholster wrote about this forum on intrinsic bias in a Science magazine article from May 27th. Pinholster reports that multiple speakers observed that bias in scientific peer-review is not only a problem of fairness.  Geraldine Richmond, the AAAS Board Chair, noted that “unconscious assumptions about gender, ethnicity, disabilities, nationality, and institutions clearly limit the science and technology talent pool and undermine scientific innovation.”

Editors from the New England Journal of Medicine and the American Chemical Society pointed out a US-centric bias in peer-review. Gender bias was discussed as well by Suzanne C. Iacono, head of the Office of Integrative Activities at the National Science Foundation (NSF). Though success rates in grant funding from NSF were similar for men and women in 2014, women submitted only one quarter of the total grant applications. Iacono also noted that success rates for NSF applications submitted by African-American scientists were lower than the overall success rate of submitted applications (18% vs 24%), but more worrisome is the fact that only 2% of the applications were submitted by African-American scientists. Similarly, Richard Nakamura, director of the Center for Scientific Review at the National Institutes of Health (NIH), cited that African-American scientists have a funding success rate at NIH that is approximately half that of white applicants.

While a number of potential interventions to minimize bias were discussed, including double-blind peer-review, it is clear from the relatively small number of funding applications from women and African-Americans that larger structural changes must occur to support and retain women and minority scientists early in their scientific development. The interest of AAAS in studying and addressing problems of bias in scientific peer-review is commendable. Understanding the problem is an important first step and finding a solution will require practice in self-awareness, as well as cooperation between high schools, universities, and finally funding and publishing agencies. (Ginger Pinholster, Science)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

June 7, 2016 at 10:00 am