Science Policy For All

Because science policy affects everyone.

Posts Tagged ‘ethics’

Science Policy Around the Web – April 24, 2019


By: Patrick Wright, PhD

Image by mohamed Hassan from Pixabay 

Why Some Anti-bias Training Misses the Mark

A new study published in the Proceedings of the National Academy of Sciences (PNAS), entitled “The mixed effects of online diversity training,” reports that online diversity-training programs aimed at reducing gender and racial bias among employees do not substantially affect workplace behavior, particularly among male employees.

The study cohort consisted of 3,016 volunteers (61.5% men), all salaried employees of a single global professional-services firm spanning 63 nations. Each participant was randomly assigned to one of three conditions: gender-bias training, general-bias training, or a control condition that received no bias-specific training. Training for the two treatment conditions was divided into five sections, including “What are [gender] stereotypes and why do they matter?” and “How can we overcome [gender] stereotypes?” (the word “gender” was excluded from the general-bias sessions). The control condition, in contrast, contained sections such as “Why is inclusive leadership important?” and “What makes teams more inclusive?”; neither bias nor stereotyping was ever explicitly mentioned.

The authors acquired data on attitudinal and behavioral changes for up to five months after the training. All volunteers were asked to complete a follow-up survey to help address inequalities that women and racial minorities face in the workplace. Additionally, once a week for 12 weeks after completion of training, employees were sent texts with prompts such as “Have you used any inclusive leadership strategies this week? Respond Y for Yes and N for No.”

Interestingly, the authors observed no positive shifts in behavior among male volunteers. Only members of groups that are commonly affected by bias (e.g., under-represented minorities) were observed to change their behavior. Lead author Edward Chang summarized this finding: “The groups that historically have had more power – white people and men – didn’t move much.” Women who participated in the training sought mentorship from senior colleagues and offered mentorship to junior female colleagues after the sessions.

Chester Spell, a professor of management at the Rutgers School of Business in Camden, New Jersey, who studies behavioral and psychological health in organizations, believes that for diversity training to be truly impactful, it “has to be part of the DNA of an organization, not an appendix.” Organizations must show that they are serious about fighting bias by committing to many initiatives aimed at educating employees about the presence and effects of bias. In the spring of 2018, Starbucks closed 8,000 stores on a Tuesday afternoon for a four-hour anti-bias training for employees, focused specifically on racial tolerance. This was in response to a prior incident in which a Philadelphia-area Starbucks café manager’s call to police resulted in the arrests of two black men who were waiting in the café for a friend. However, Starbucks did not comment on future training plans.

The most effective means of implementing anti-bias training are still not established. This remains an active area of research, especially regarding the ideal delivery method and number of sessions. In a 2016 meta-analysis spanning 40 years of research on the impact of diversity training, Bezrukova et al. observed little effect of stand-alone diversity trainings on employees’ attitudes toward bias. Offering repeated or longer training sessions, complemented by other approaches such as deciding hiring criteria prior to candidate evaluation, may be the best path forward. However, individuals in academia have a more favorable opinion of, and are more receptive to, these trainings than those from the business sector. Ülger and colleagues, in a meta-analytic review of 50 studies of in-school interventions on attitudes toward outgroup members (members of different ethnic, religious, or age groups, for example), reported that statistically significant, moderate changes in outgroup attitudes can be obtained via anti-bias programs in school. However, there was no evidence that teacher-led or media-based interventions produce the positive outcomes achieved by researcher-led interventions. Notably, one-on-one interventions were the most impactful.

(Virginia Gewin, Nature)

Universities Will Soon Announce Action Against Scientists Who Broke NIH Rules, Agency Head Says

During a Senate Appropriations Subcommittee hearing in early April, Dr. Francis Collins, Director of the National Institutes of Health (NIH), said that over the rest of the month many universities would announce actions against faculty members who failed to comply with agency rules on protecting the confidentiality of peer review, handling intellectual property, and disclosing foreign ties. Dr. Collins told Senator Roy Blunt (R-MO), chair of the subcommittee, that there are ongoing investigations at more than 55 U.S. institutions and that some scientists have been found to have concealed foreign funding for work that was also being supported by the NIH.

The push to systematically uncover potential violations of these intellectual-property and confidentiality rules began in August 2018, when Dr. Collins wrote to the 10,000 institutions receiving NIH funding, asking them to look for any instances of concerning behavior. Dr. Collins spoke of faculty researchers already being dismissed: “There are increasing instances where faculty have been fired, have been asked to leave the institution, many of them returning back to their previous foreign base.” For example, the MD Anderson Cancer Center, part of the University of Texas system, announced last week that it had fired three senior researchers who committed potentially “serious” violations of rules involving the confidentiality of peer review and the disclosure of foreign ties, after they were identified by the NIH.

However, both Dr. Collins and Senator Blunt emphasized that this is not a pervasive problem; most foreign scientists working in the United States and funded by the NIH follow funding and disclosure rules. “We need to be careful that we don’t step into something that almost seems a little like racial profiling,” Dr. Collins stated at the hearing.

(Jocelyn Kaiser, Science)



Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

April 25, 2019 at 10:06 am

Science Policy Around the Web – April 19, 2019


By: Neetu Gulati, PhD

Image by Raman Oza from Pixabay 

Scientists Restore Some Function in the Brains of Dead Pigs 

Hours after the animals were killed, scientists partially revived the brains of dead pigs, contradicting the dogma surrounding death. Cut off from oxygen, the brain of a mammal is supposed to die after about 15 minutes, and the process was thought to be irreversible: once the cells in the brain die, they cannot be brought back. A study published in Nature has challenged this view. While none of the tested brains regained signs of consciousness, the Yale researchers were able to demonstrate that cellular function was either preserved or restored.

The study used 32 brains from pigs that had been slaughtered for food. After waiting four hours, well past the 15 minutes of oxygen deprivation thought to “kill” the brain, the researchers hooked the brains up to a system that, for six hours, pumped in BrainEx, a cocktail of specially formulated nutrients and chemicals. Compared with brains not given BrainEx, the treated brains had better-preserved structure and less cell death, and some cellular functions were restored. Nevertheless, Nenad Sestan, the lead researcher on the project, was quick to point out that while the brains had some restored activity, “this is not a living brain.”

In fact, the goal of the study was not to restore consciousness, which would raise many ethical concerns. The scientists monitored electrical activity in the brains and intended to stop the experiment at any sign of consciousness. Stephen Latham, a bioethicist who worked with the team, explained that they would need more ethical guidance before attempting any studies that altered consciousness in the pigs’ brains. As a further safeguard, the BrainEx cocktail also included a drug known to dampen neuronal activity.

The implications of this study are far-reaching. The breakthrough will hopefully create a better link between basic neuroscience and clinical research, and even with the ethical considerations, it is likely that people will eventually want to apply this technology to human brains. It may lead to interesting policy discussions because, currently, while there are many restrictions on what can be done with living research animals or human subjects, there are far fewer restrictions on the dead. It may also affect organ transplantation efforts involving brain-dead individuals, as they may eventually become candidates for brain revival. A lot still needs to be investigated in the meantime, but the implications are vast and mind-blowing.

(Nell Greenfieldboyce, NPR)

Darkness Visible, Finally: Astronomers Capture First Ever Image of a Black Hole

Last week it was announced that scientists had captured an image of the shadow of a black hole for the first time in history. The image is the result of an international collaboration of 200 members of the Event Horizon Telescope team. The results were simultaneously announced at news conferences in six locations around the world, including at the National Science Foundation.

The data were collected over a 10-day period by eight telescopes around the world, focused on Messier 87 (M87), a giant galaxy within the constellation Virgo. It is within M87 that a black hole billions of times more massive than the sun was visualized. After data collection, it took two years of computer analysis to produce the blurry image of a lopsided ring of light around a dark circle.

Black holes like the one found in M87 are objects so massive and dense that their gravitational pull lets no matter escape. According to Einstein’s theory of general relativity, space-time within a black hole is so strongly curved that even light cannot escape. The first direct proof of the existence of black holes came in 2016, when LIGO detected the collision of a pair of black holes. Now, merely three years later, the world has photographic evidence, and features of the black hole can be determined, including its mass: 6.5 billion solar masses, heavier than most previous determinations.
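That mass figure also pins down the size of the event horizon. As a back-of-the-envelope illustration (not part of the article; standard physical constants in SI units), the Schwarzschild radius r_s = 2GM/c^2 of a 6.5-billion-solar-mass black hole works out to roughly 120 astronomical units:

    # Back-of-the-envelope Schwarzschild radius, r_s = 2GM/c^2, for a
    # black hole of 6.5 billion solar masses (the mass reported for M87).
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_SUN = 1.989e30     # solar mass, kg
    AU = 1.496e11        # astronomical unit, m

    M = 6.5e9 * M_SUN
    r_s = 2 * G * M / c ** 2
    print(f"r_s = {r_s:.2e} m (~{r_s / AU:.0f} AU)")

An object that size, at M87’s distance, subtends the tiny angle that required a planet-spanning telescope network to resolve.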

Moving forward, the Event Horizon Telescope partnership plans to continue observations of M87 and collect data of other regions of space. The telescope network also continues to expand: earlier this year another telescope was added to the collaboration, with more antennas also expected to join soon. The collaboration will continue to observe black holes and monitor their behavior to see how things change.

(Dennis Overbye, New York Times)


Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 21, 2019 at 12:10 pm

The need for regulation of artificial intelligence


By: Jayasai Rajagopal, Ph.D.


Source: Wikimedia

The development and improvement of artificial intelligence (AI) portends change and revolution in many fields. A quick glance at the Wikipedia article on applications of artificial intelligence highlights the breadth of fields that have already been affected by these developments: healthcare, marketing, finance, music, and many others. As these algorithms increase in complexity and grow in their ability to solve more diverse problems, the need to define rules by which AI is developed becomes more and more important.

Before explaining the potential pitfalls of AI, a brief explanation of the technology is required. Attempting to define artificial intelligence raises the question of what is meant by intelligence in the first place. Poole, Mackworth, and Goebel clarify that for an agent to be considered intelligent, it must adapt to its surrounding circumstances, learn from changes in those circumstances, and apply that experience in pursuit of a particular goal. A machine that is able to adapt to changing parameters, adjust its programming, and continue to pursue a specified directive is an example of artificial intelligence. While such simulacra are found throughout science fiction, dating back to Mary Shelley’s Frankenstein, they are a more recent phenomenon in the real world.
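To make that working definition concrete, here is a minimal sketch (my illustration, not drawn from Poole, Mackworth, and Goebel) of an agent that occasionally explores, learns value estimates from reward feedback, and keeps pursuing its goal even after the environment changes:

    import random

    class SimpleAgent:
        """Toy agent: adapts to feedback, learns from experience, and
        pursues a goal (maximizing reward) -- the three ingredients of
        the definition above."""

        def __init__(self, n_actions, epsilon=0.1):
            self.epsilon = epsilon             # fraction of time spent exploring
            self.counts = [0] * n_actions      # times each action was tried
            self.values = [0.0] * n_actions    # learned value estimates

        def act(self):
            # Explore occasionally; otherwise exploit learned experience.
            if random.random() < self.epsilon:
                return random.randrange(len(self.values))
            return max(range(len(self.values)), key=lambda a: self.values[a])

        def learn(self, action, reward):
            # Update a running average of the reward for the chosen action.
            self.counts[action] += 1
            self.values[action] += (reward - self.values[action]) / self.counts[action]

    # An environment whose "right answer" changes halfway through: the
    # agent must adapt its behavior to keep earning reward.
    agent = SimpleAgent(n_actions=2)
    for step in range(1000):
        best_action = 0 if step < 500 else 1
        choice = agent.act()
        agent.learn(choice, 1.0 if choice == best_action else 0.0)

A running average adapts only slowly to change; real systems weight recent experience more heavily, but the loop shows the adapt-learn-pursue cycle.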

Development of AI technology has taken off within the last few decades as computer processing power has increased. Computers began successfully competing against humans in chess as early as 1997, with Deep Blue’s victory over Garry Kasparov. In recent years, computers have started to earn victories in even more complex games such as Go and even video games such as Dota 2. Artificial intelligence programs have become commonplace at many companies, which use them to monitor their products and improve the performance of their services. A report in 2017 found that one in five companies employed some form of AI in their workings. Such applications are only going to become more widespread in the future.

In the healthcare field, the prominence of AI is readily visible. A report by BGV predicted a total of $6.6 billion invested in AI within healthcare by 2021. Accenture found that this could lead to savings of up to $150 billion by 2026. With the recent push toward personalized and precision medicine, AI can greatly improve treatment and the quality of care.

However, there are pitfalls associated with AI. At the forefront, AI poses a potential risk of abuse by bad actors. Companies and websites are frequently in the news for being hacked and losing customers’ personal information. The 2017 WannaCry attack crippled the UK’s healthcare system, as regular operations at many institutions were halted due to compromised data infrastructure. While cyberdefenses will evolve with the use of AI, there is a legitimate fear that bad actors could just as easily utilize AI in their attacks. Regulation of the use and development of AI can limit the number of such actors that could access these technologies.

Another concern with AI is the privacy question associated with the amount of data required. Neural networks, which seek to imitate the neurological processing of the human brain, require large amounts of data to reliably generate their conclusions. Such large amounts of data must be curated carefully to make sure that identifying information that could compromise the privacy of citizens is not easily divulged. Additionally, data mining and other AI algorithms could reveal information that individuals may not want disclosed. In 2012, a coupon-suggestion algorithm used by Target was able to discern the probability that some of its shoppers were pregnant. This proved problematic for one teenager, whose father wanted to know why Target was sending his daughter coupons for maternity clothes and baby cribs. As with the cyberwarfare concern, regulation is a critical component in protecting the privacy of citizens.
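A minimal sketch of the kind of curation step described above (pandas is assumed; the column names and records are invented for illustration):

    import pandas as pd

    # Invented purchase-history records, for illustration only.
    records = pd.DataFrame({
        "name":      ["A. Smith", "B. Jones"],
        "address":   ["1 Main St", "2 Oak Ave"],
        "zip_code":  ["20814", "20852"],
        "item":      ["prenatal vitamins", "unscented lotion"],
        "spend_usd": [23.50, 11.25],
    })

    # Strip direct identifiers before the data reaches any model.
    DIRECT_IDENTIFIERS = ["name", "address"]
    training_data = records.drop(columns=DIRECT_IDENTIFIERS)

    # Caveat: quasi-identifiers such as zip code can still permit
    # re-identification when joined with outside datasets, which is why
    # curation alone is not a complete privacy safeguard.
    print(training_data)

The caveat in the last comment is the crux of the policy problem: simple curation helps, but it does not by itself guarantee privacy.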

Finally, in some fields, including healthcare, there is an ever-present concern that artificial intelligence may replace some operations entirely. For example, in radiology there is a fear that improvements in image analysis and computer-aided diagnosis through the use of neural networks could replace clinicians. For the healthcare field in particular, this raises several important ethical questions. What if the diagnosis of an algorithm disagrees with a clinician’s? As the knowledge an algorithm has is limited by the information it has been exposed to, how will it react when a unique case is presented? From this perspective, regulation of AI is important not only to address practical concerns, but also to pre-emptively answer ethical questions.

While regulation as strict as Asimov’s Three Laws may not be required, a more uniform set of rules governing AI is needed. At the international level, there is much debate among the members of the United Nations as to how to address the issue of cybersecurity. Other organizations, such as the European Union, have made more progress. A document recently released by the EU highlights ethical guidelines that may serve as the foundation for future regulations. At the domestic level, there has been a push from scientists and leaders in the field toward harnessing the development of artificial intelligence for the good of all. In particular, significant headway has been made in the regulation of self-driving cars. Laws passed in California restrict how the cars can be tested, and by 2014 four states already had legislation applying to these kinds of cars.

Moreover, the FDA recently released a statement describing its approach to the regulation of artificial intelligence in the context of medical devices. At the time of this writing, a discussion paper describing the FDA’s proposed approach is open for public comment. The agency notes that the conventional methods of acquiring pre-market clearance for devices may not apply to artificial intelligence. The newly proposed framework adapts existing practices to the context of software improvements.

Regulation must also be handled with care. Over-limiting the use of, and research in, artificial intelligence could stifle development. Laws must be made with knowledge of the potential benefits that new technological advancements could bring. As noted by Gurkaynak, Yilmaz, and Haksever, lawmakers must strike a balance between preserving the interests of humanity and the benefits of technological improvement. Indeed, artificial intelligence poses many challenges for legal scholars.

In the end, artificial intelligence is an exciting technological development that can change the way we go about our daily business. With proper regulation, legislation, and research focus, this technology can be harnessed in a way that benefits the human experience while preserving development and the security of persons.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 18, 2019 at 2:25 pm

Science Policy Around the Web – November 20, 2018


By: Andrew Wright, B.S.

Source: Wikimedia

Habitat Loss Threatens All Our Futures, World Leaders Warned

Recent reports have suggested that humanity has only 12 years to avoid catastrophic environmental collapse due to 1.5°C of warming above pre-industrial levels. While solutions to the threat of runaway climate change have been given a new sense of urgency by these findings, there exists a commensurate and often less-visited issue: rapid declines in global biological diversity. Driven primarily by human conversion of terrestrial and freshwater ecosystems (via forest clearing and river damming, respectively), vertebrate species have declined by 60% on average since 1970, according to the World Wildlife Fund’s most recent Living Planet Report. While this decline appears strongest in South and Central America and in freshwater habitats, the report joins a compendium of literature suggesting holistic declines in biodiversity among birds, insects, fish, and terrestrial vertebrates as part of an ongoing anthropogenic mass extinction event.

To address some of these issues, parties to the UN Convention on Biological Diversity (CBD) are currently meeting in Sharm El Sheikh, Egypt, to discuss progress on the Aichi biodiversity targets for 2020. These targets came out of the Convention on Biological Diversity, a multilateral treaty signed in 1992 focused on preserving biodiversity, the sustainable use of biological resources, and the equitable sharing of resources. The Aichi biodiversity targets specified that, by 2020, people would be aware of risks to biodiversity and biodiversity values would be adopted by public, private, and governmental entities. Given the rapidity, intensity, and ubiquity of the decline in species, most, if not all, of these targets will likely be missed. As such, the delegates from the 196 signatory nations will also work on creating new biodiversity targets to be finalized at the next CBD meeting in China.

Since a comprehensive solution seems necessary given the increasingly global nature of trade, the authors of the new targets hope to garner a greater degree of international attention, and intend to make the case that government commitments to reversing or pausing biodiversity loss should receive the same weight as action on climate change.

(Jonathan Watts, The Guardian)

The Ethical Quandary of Human Infection Studies

The United States has greatly improved its ability to soundly regulate the ethics of clinical studies since the infamous malfeasance of the Tuskegee syphilis study. Most significantly, the National Research Act of 1974 established institutional review boards to regulate the use of human subjects according to the principles of respect for persons, beneficence, and justice.

The National Research Act provided a substantial step forward, including a clear requirement for universal informed consent. However, the expansion of clinical studies to new international regions of extreme poverty, due in part to the influx of private money from large charitable organizations, has come with novel ethical considerations. In these newly engaged populations, where income, education, and literacy levels may be lower, emphasis is now being placed on how to recruit volunteers without implicitly taking advantage of their circumstances.

One area of concern is compensation levels. Although compensation in a malaria infection study in Kenya was tied to the local minimum wage, the number of volunteers recruited far surpassed expectations. This may have been because payment during the study was guaranteed and consistent, in contrast to local work.

Aware of the concern, two of the largest private medical-research funding organizations, the Bill and Melinda Gates Foundation and the Wellcome Trust, have recently instituted ethical guidelines putatively reinforcing the principle of beneficence, placing special emphasis on maximizing benefits over risk. It is an open question whether these protections will be sufficient, but at the very least it is important that rules be put in place proactively rather than reactively.

 

(Linda Nordling, Undark/Scientific American)

 

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

November 20, 2018 at 11:58 am

Science Policy Around the Web – October 19, 2018


By: Ben Wolfson, Ph.D.

Source: Pixabay

Climate Change

 

Climate Change prompts a rethink of Everglades management

The Florida Everglades is a large area of tropical wetlands that has received significant attention due to the degradation of its unique ecosystem by urban development. The Everglades were designated a World Heritage Site in 1979 and a Wetland Area of Global Importance in 1987, and in 2000 Congress approved the Comprehensive Everglades Restoration Plan (CERP) to combat further decline and provide a framework for Everglades restoration.

For the past 18 years, these efforts have been directed toward curtailing damage from urbanization and pollution. However, as outlined in a congressionally mandated report released on October 16th by the National Academies of Sciences, Engineering, and Medicine, new strategies may be necessary. In the biennial progress report, an expert panel called for CERP managers to reassess their plans in light of new climate-change models. The report focuses on the 7 centimeters of sea-level rise seen since 2000, and points out that southern Florida is especially at risk from climate change and is expected to experience a 0.8-meter rise in sea level by the year 2100.

It is clear that as more is learned about the realities of climate change, the goals and methods of conservation projects are shifting, and past strategies must be adapted to fit the realities of a warming world.

(Richard Blaustein, Science)

Animal Research

NIH announces plan for chimp retirement

 

In 2015, the NIH announced that it would no longer support biomedical research on chimpanzees, two years after pledging to significantly reduce the number of chimpanzees used in research. These decisions were based on a combination of reduced demand for chimpanzees in research and the designation of captive chimpanzees as endangered in 2015.

On Thursday, October 18th, the NIH announced the next step in the process of retiring research chimps. While research was stopped in 2015, many of the chimpanzees had nowhere to go and remained housed at laboratories. One federal chimpanzee sanctuary, Chimp Haven, exists in Keithville, Louisiana; however, lack of space and the difficulty of relocating some animals have slowed the transition to better habitats.

In the Thursday announcement, NIH director Francis Collins outlined the guidelines for future chimpanzee relocation. These include streamlining medical records and determining whether chimpanzees are physically healthy enough to be relocated. Many of the chimpanzees are at an advanced age, meaning they have developed chronic illnesses similar to those experienced by humans. However, Collins emphasized that there must be a more acute medical problem for relocation not to take place. In addition, both the research facility and Chimp Haven must agree that the former research chimpanzees are capable of being relocated, and disagreements will be mediated by a panel of outside veterinarians.

Collins additionally stressed that while transfer to Chimp Haven is the ideal outcome for all retired chimps, those housed at NIH-supported facilities do not live isolated in cages or in laboratories and are housed in social groups with appropriate species-specific accommodations.

The development of these clear guidelines will expedite chimpanzee relocation while emphasizing chimpanzee health and comfort.

(Ike Swetlitz, STAT News)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

October 19, 2018 at 3:25 pm

Science Policy Around the Web – May 1, 2018


By: Liu-Ya Tang, PhD

Source: Pixabay

Artificial Intelligence

With €1.5 billion for artificial intelligence research, Europe pins hopes on ethics

While artificial intelligence (AI) brings convenience to modern life, it can also create ethical problems. For example, AI systems are generated through machine learning: systems usually have a training phase in which scientists “feed” them existing data and they “learn” to draw conclusions from that input. If the training dataset is biased, the AI system will produce biased results. To put ethical guidelines on AI development and to catch up with the United States and China in AI research, the European Commission announced on April 25 that it would devote €1.5 billion to AI research and innovation through 2020.
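As a toy demonstration of that failure mode (fabricated data; NumPy and scikit-learn assumed; not tied to any real deployed system), a model fit to historically biased decisions simply reproduces the bias:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Fabricated training set: feature 0 is a group flag, feature 1 a
    # qualification score. The historical labels favored group 1
    # regardless of qualification -- a biased "ground truth".
    n = 1000
    group = rng.integers(0, 2, n)
    skill = rng.normal(0.0, 1.0, n)
    past_decision = (group == 1) | (skill > 1.5)

    X = np.column_stack([group, skill])
    model = LogisticRegression().fit(X, past_decision)

    # Two equally (un)qualified applicants, differing only by group:
    # the model has learned the historical bias.
    print(model.predict([[1, -1.0], [0, -1.0]]))  # typically [True, False]

Auditing for exactly this pattern, rather than assuming the training labels are neutral, is the kind of safeguard Europe’s ethics-first approach is meant to encourage.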

Although the United States and China have made great advances in the field, the ethical issues stemming from AI may have been neglected, as both practice “permissionless innovation,” said Eleonore Pauwels, a Belgian ethics researcher at the United Nations University in New York City. She spoke highly of Europe’s plan, which is expected to enhance fairness, transparency, privacy, and trust, but the outcome is still unknown. As Bernhard Schölkopf, a machine-learning researcher at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, put it, “We do not yet understand well how to make [AI] systems robust, or how to predict the effect of interventions.” He also cautioned that focusing only on potential ethical problems would impede AI research in Europe.

Why does European AI research lag behind the United States and China? First, Europe has strong AI research but a weak AI industry. Startup companies with innovative technologies, which are oftentimes risky, cannot raise enough funds because old industrial policies favor big, risk-averse firms. The commission’s announcement therefore underscores the importance of public-private partnerships to support new technology development. The second reason is that academic salaries are not high enough to keep AI researchers in academia, compared with salaries in the private sector. To solve this problem, a group of nine prominent AI researchers asked governments to set up an intergovernmental European Lab for Learning and Intelligent Systems (ELLIS), which would be a “top employer in machine intelligence research” and offer attractive salaries as well as “outstanding academic freedom and visibility.”

(Tania Rabesandratana, Science)

Public health

Bill Gates calls on U.S. to lead fight against a pandemic that could kill 33 million

Pandemics, historically driven by diseases such as cholera, bubonic plague, smallpox, and influenza, can be devastating to world populations. Several outbreaks of viral diseases have been reported in scattered areas around the world, including the 2014 Ebola epidemic, leading to growing concerns about the next pandemic. During an interview conducted last week, Bill Gates discussed the issue of pandemic preparedness with a reporter from The Washington Post. Later, he gave a speech on the challenges associated with modern epidemics before the Massachusetts Medical Society.

The risk of a pandemic is high, as the world is highly connected and new pathogens are constantly emerging through naturally occurring mutations. Modern technology has also raised the possibility of bioterrorism attacks. In less than 36 hours, a pathogen can travel from a remote village to major cities on any continent and become a global crisis. During his speech, Gates cited a simulation by the Institute for Disease Modeling estimating that nearly 33 million people worldwide could be killed by a highly contagious and lethal airborne pathogen like the 1918 influenza. He said “there is a reasonable probability the world will experience such an outbreak in the next 10-15 years.” The risk becomes higher when government funding for global health security is inadequate: the U.S. Centers for Disease Control and Prevention is planning to dramatically downsize its epidemic-prevention activities in 39 out of 49 countries, which would make these developing countries even more vulnerable to outbreaks of infectious diseases.
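Projections like the one Gates cited come from detailed epidemiological simulations. As a rough, much-simplified illustration of the underlying approach (a toy SIR compartment model; all parameters are invented and this is not the Institute for Disease Modeling’s model):

    # Toy SIR model: a deliberately simplified stand-in for the kind of
    # simulation cited above. All parameters are invented, and the
    # output is illustrative only.
    N = 7.6e9                  # world population
    beta, gamma = 0.375, 0.25  # transmission / recovery rates per day (R0 = 1.5)
    ifr = 0.006                # assumed infection fatality ratio

    S, I, R = N - 1.0, 1.0, 0.0
    for day in range(365 * 2):
        new_infections = beta * S * I / N
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries

    print(f"cumulative infections: {R:.2e}, projected deaths: {ifr * R:.2e}")

Real models add age structure, travel networks, and interventions, but the core logic is the same: assumptions about transmissibility and lethality propagate into a death toll.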

Gates expressed this urgency to President Trump and senior administration officials at several meetings, and he also announced a $12 million Grand Challenge, in partnership with the family of Google Inc. co-founder Larry Page, to accelerate the development of a universal flu vaccine. He highlighted scientific and technical advances in the development of better vaccines, antiviral drugs, and diagnostics, which could provide better preparation for, prevention of, and treatment of infectious disease. Beyond this, he emphasized that the United States needs a strategy to utilize and coordinate domestic resources and to take a global leadership role in the fight against pandemics.

(Lena H. Sun, The Washington Post)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

May 1, 2018 at 5:53 pm

Science Policy Around the Web – March 6, 2018


By: Cindo O. Nicholson, Ph.D.

Source: Pixabay

Artificial Intelligence & Ethics

Artificial intelligence could identify gang crimes-and ignite an ethical firestorm

Today, many industries and our favorite gadgets use some form of artificial intelligence (AI) to make better predictions about user and consumer behavior. AI is also being adopted by police departments to highlight areas where crime is likely to occur, helping patrol officers prevent crimes before they happen (i.e., predictive policing). Recently, at the Artificial Intelligence, Ethics, and Society (AIES) conference in New Orleans, LA, researchers presented a new algorithm that can classify crimes as gang-related based on partial information. In particular, the new algorithm can identify gang crimes using only four pieces of information: the primary weapon used, the number of suspects, and the neighborhood and location (street corner vs. alley, for example) where the crime took place.
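The conference paper used a specialized neural network; purely to illustrate how a classifier could be trained on those four inputs, here is a generic sketch with fabricated records (pandas and scikit-learn assumed):

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Fabricated records for illustration; the study trained on real
    # police records.
    crimes = pd.DataFrame({
        "weapon":       ["handgun", "knife", "handgun", "none"],
        "n_suspects":   [3, 1, 4, 1],
        "neighborhood": ["A", "B", "A", "C"],
        "location":     ["street corner", "alley", "alley", "store"],
        "gang_related": [True, False, True, False],   # training labels
    })

    # One-hot encode the categorical features, then fit a classifier.
    X = pd.get_dummies(crimes.drop(columns="gang_related"))
    y = crimes["gang_related"]
    clf = RandomForestClassifier(random_state=0).fit(X, y)

    # A misclassification here labels someone's case "gang-related" --
    # exactly the consequence debated at the conference.
    print(clf.predict(X.iloc[[0]]))

Note that the labels themselves carry whatever bias went into past policing decisions, which is precisely the concern raised below.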

Many in attendance agreed that the findings (published by AIES) could change the way police approach and respond to crimes by classifying a crime beforehand as gang-related. However, not everyone was convinced that the new algorithm would perform any better than an officer’s intuition and experience. In fact, some believed that relying on such an algorithm could have unintentional, negative consequences. A point of contention at the conference was that the research team appeared not to have given sufficient consideration to whether the training data were controlled for bias, or to what would happen if individuals were misclassified as gang members.

AI is a powerful technology, and it can be applied to solve problems in fields like ecology and conservation, public health, drug development, and others. However, like all powerful technologies, its regulation must keep pace with its development, along with consideration of its potential misuses and unintended consequences.

(Matthew Hutson, Science Magazine)

Science Education

Florida’s residents could soon get the power to alter science classes

The possibility that the public could make recommendations about the instructional materials used in science classes is moving closer to reality. Two education bills being considered by Florida’s legislature would grant the state’s residents the means to recommend which instructional materials are used in the classrooms of schools in their district.

The education bills would add to a law enacted in June 2017 that grants Florida’s residents the right to challenge the topics educators teach students. In particular, the bills under consideration would allow Florida’s residents to review instructional materials used in class and suggest changes to those materials. However, the final decision on whether recommendations from residents are accepted would still rest with the school board.

Among the concerns of the scientific community is that these laws would provide a mechanism for creationists, climate-change deniers, and flat-earth proponents (commonly referred to as “flat-earthers”) to insert their non-scientific viewpoints into science lesson plans. On the other hand, state representatives supporting the bills contend that highlighting different viewpoints is important and would allow for debate and for drawing one’s own conclusions.

While engaging the public on the content of educational curricula could have its merits, it could have negative consequences when public opinion overrides curricula developed from knowledge gained and refined by rigorous scientific interrogation over several decades. If more education bills allowing the public to challenge instructional materials are approved, it will be imperative that individuals with scientific backgrounds serve as a voice of reason on school boards.

(Giorgia Guglielmi, Nature Magazine)

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

March 7, 2018 at 11:28 am