Science Policy For All

Because science policy affects everyone.

Archive for April 2019

Science Policy Around the Web – April 30th, 2019


By: Andrew Wright, BSc

Source: Pixabay

North American drilling boom threatens big blow to climate efforts, study finds

At a time when the most recent Intergovernmental Panel on Climate Change (IPCC) report has determined that CO2 emissions must be halved by 2030 to prevent irreversible climate change (and the consequences thereof), energy investments appear to be following the opposite trend. According to the Global Energy Monitor's assessment of pipeline infrastructure, 302 new pipelines are under development, 51.5% of them in North America. This reflects a current pipeline expansion investment of $232.5 billion, part of a total $1.05 trillion in investments that include processing, storage, export, and other oil and gas related expenses. Even though 80% of these pipelines are dedicated to natural gas infrastructure, if every US project were completed and fully utilized, they would lead to an approximately 11% increase in national CO2 emissions by 2040, at a time when those emissions should be approaching a 75% reduction.

Ignoring the impacts on global climate, human health, and the associated societal cost, the authors of this infrastructure assessment argue that these pipelines may yield a poor return on investment. To start, the output of the new North American pipelines far exceeds domestic energy demand, so producers will have to rely on exporting oil and natural gas to foreign markets. However, these same markets are boosting their own capacity for fuel production and will likely become less reliant on imports from North America. Furthermore, renewable sources of energy have become as cheap as or cheaper than their oil and gas counterparts and are expected to continue becoming more affordable as technology improves. Both of these factors threaten to upend the future market these pipeline investments will require, in much the same way that cheap natural gas production disrupted the US coal market, which was relying on the same foreign export model before its collapse.

(Oliver Milman, The Guardian)

Sexual harassment is pervasive in US physics programs

Sexual harassment is a problem across United States academia. For example, a National Academies of Sciences, Engineering, and Medicine (NASEM) report from 2018 found that roughly 22% of female respondents in non-STEM majors said they had experienced sexual harassment, whereas within STEM majors that percentage ranged from 20% in the sciences to 47% in medicine. However, research published in the journal Physical Review Physics Education Research shows that sexual harassment is particularly pervasive among women pursuing an undergraduate degree in physics. Of the women who responded, 338 of 455, or 74.3%, reported experiencing harassment. In addition, 20.4% of respondents said they had experienced all three forms of sexual harassment evaluated: sexual gender harassment, sexist gender harassment, and unwanted sexual attention.

Much like the NASEM report indicated for all academic fields, the high incidence of sexual harassment observed in physics programs is correlated with negative academic outcomes for those experiencing it. These include a diminished sense of belonging and a higher propensity toward the impostor phenomenon, or attributing personal success to external factors. While large funding institutions, such as the National Institutes of Health and the National Science Foundation, have recently pushed harder to combat sexual harassment, it is clear that such efforts should be expanded and that particular attention should be paid to certain academic fields.

(Alexandra Witze, Nature News)



Have an interesting science policy link? Share it in the comments!


Written by sciencepolicyforall

April 30, 2019 at 10:46 am

Recent trends and emerging alternatives for combating antibiotic resistance


By: Soumya Ranganathan, M.S.

Image by Arek Socha from Pixabay 

Antibiotic resistance is an ongoing and rising global threat. While bacteria and other microbes naturally develop resistance to antibiotics and antimicrobials slowly over time, the overuse and misuse of antibiotics has accelerated this effect and led to the current crisis. The new Global Antimicrobial Surveillance System (GLASS), developed by the World Health Organization (WHO), reveals antibiotic resistance in 500,000 people with suspected infections across 22 countries. A study supported by the UK government and the Wellcome Trust estimates that antimicrobial resistance (AMR) could lead to an annual death toll of about 10 million by 2050. It is also predicted to have a huge economic impact and could cost 100 trillion USD between 2017 and 2050.

Factors underlying the non-targeted use of antibiotics

Prescribing the right antibiotic for an infection takes about a week because of the time needed to identify the infectious agent. To avoid the spread of infection, physicians are forced to begin treatment before the agent is identified, and typically prescribe a broad-spectrum antibiotic. Because broad-spectrum antibiotics act against a wide range of bacterial strains, their rampant use has led to the emergence of bacterial strains that are resistant to even the most potent antibiotics available. This trend has made it difficult to treat previously curable hospital-acquired infections and other once-benign infections. Not only is the discovery of new antibiotics complicated (only one new class of antibiotics has been developed in the past three decades), the development of an antibiotic from discovery to medicine also generally takes one to two decades. Here we will explore alternative strategies scientists around the world are pursuing in their fight against antibiotic resistance.

Antibiotic Susceptibility Test  

Reducing the time between a patient becoming ill and receiving treatment is critical for containing and effectively treating the infection. A key part of this process entails improving the antibiotic susceptibility testing (AST) system, which typically has two steps: (i) identifying the infectious agent and (ii) identifying the most effective antibiotic to treat the infection.

Conceptually, new and rapid AST systems have been proposed and developed thanks to advancements in phenotyping methods, digital imaging and genomic approaches. But a plethora of factors act as roadblocks for implementing rigorous and standardized AST systems worldwide. A recently published consensus statement explores the major roadblocks for the development and effective implementation of these technologies while also suggesting ways to move past this stalemate. The major points of the statement are summarized below. 

  • Regulation – Since different regions and countries have their own requirements for marketing and validating a diagnostic method, the onus is on developers to meet various demands. This also requires harmonization and cooperation among policy makers to formulate and agree on a standard set of rules.
  • Collection and dissemination of information regarding various strains and antibiotics – Antibiograms, summaries of the susceptibility rates of selected pathogens to a variety of antimicrobial drugs, provide comprehensive information about local antibiotic resistance (see the sketch after this list). The challenge here lies in making the data available in real time and in developing a "smart antibiogram". This is necessary to analyze samples more quickly and to reduce the time to treatment, which eventually translates into more lives saved.
  • Cost involved in developing new, sensitive, and faster diagnostics – Though current diagnostics are cheap, they are slow to identify pathogenic bacteria. The transition to more advanced and sensitive diagnostics has been slow, since their development takes time and incurs more cost. However, this scenario is likely to change soon, as rising levels of antibiotic resistance make existing diagnostics obsolete and drive more investment in this sector.
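
To make the antibiogram idea concrete, here is a minimal sketch of how such a susceptibility table could be tabulated from laboratory isolate records. The records, pathogens, and drugs below are invented for illustration; a real antibiogram is compiled from a hospital's cumulative culture data under standardized reporting rules.

```python
# Hypothetical isolate records: (pathogen, antibiotic, result), where the
# result is "S" (susceptible) or "R" (resistant). A real antibiogram is
# built from a hospital's cumulative culture data, not a hand-typed list.
from collections import defaultdict

isolates = [
    ("E. coli", "ciprofloxacin", "S"), ("E. coli", "ciprofloxacin", "R"),
    ("E. coli", "ciprofloxacin", "S"), ("S. aureus", "oxacillin", "R"),
    ("S. aureus", "vancomycin", "S"), ("S. aureus", "vancomycin", "S"),
]

# Tally susceptible and total counts for each (pathogen, drug) pair.
counts = defaultdict(lambda: [0, 0])
for pathogen, drug, result in isolates:
    counts[(pathogen, drug)][1] += 1
    if result == "S":
        counts[(pathogen, drug)][0] += 1

# Report the percent-susceptible rate, the core entry of an antibiogram.
for (pathogen, drug), (susc, total) in sorted(counts.items()):
    print(f"{pathogen:12} {drug:15} {100 * susc / total:5.1f}% susceptible (n={total})")
```

A "smart antibiogram" of the kind the consensus statement envisions would keep a table like this updated in real time as new laboratory results arrive, rather than summarizing them periodically.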

Antivirulence therapy

Small molecules are gaining prominence as alternatives or adjuvants to antibiotic treatments. Recently, researchers from Case Western Reserve University developed two small molecules, F19 and F12, that show promise in the treatment of methicillin-resistant Staphylococcus aureus (MRSA) infection in mouse models. The small molecules bind to a Staph. aureus transcription factor called AgrA, deterring it from making toxic proteins and rendering the bacteria harmless. Treatment with F19 on its own resulted in a 100% survival rate in a murine MRSA bacteremia/sepsis model, while only 30% of untreated mice survived. This kind of antivirulence therapy allows the immune system to clear the pathogens (since the bacteria are essentially harmless) without increasing the pressure to develop resistance. When used as an adjuvant with an antibiotic, F19 resulted in a 10-fold lower bacterial load in the mouse bloodstream than treatment with the antibiotic alone. This kind of combination therapy could be used in immunocompromised patients. It has also been effective against other bacterial species such as Staph. epidermidis, Strep. pyogenes, and Strep. pneumoniae, and may serve against a broad variety of gram-positive bacterial infections. Overall, the small molecule approach could also bring many previously shelved antibiotics back into use, as it provides a means to improve their efficacy in treating bacterial infections. Another class of engineered proteins, called Centyrins, shows promise for treating Staph. aureus infection through a similar mechanism: they bind to the bacterial toxins and prevent them from disrupting the immune system.

Molecular Boosters

Stanford University chemists (in a study published in the Journal of the American Chemical Society) have developed a booster molecule called r8 which, when used in combination with vancomycin (a first-line antibiotic for MRSA infections), helps the antibiotic penetrate the biofilm and persist there, enabling it to attack pathogens once they resurge from their dormant stage. This small molecule booster approach could be pursued further to give existing antibiotics additional abilities to besiege pathogens and arrest the spread of infections.

Photobleaching

A recent collaborative effort by scientists from Purdue University and Boston University has resulted in an innovative light-based approach called photobleaching (using light to alter the activity of molecules) to treat certain bacterial infections. Photobleaching of MRSA using low-level blue (460 nm) light has been found to break down STX, an antioxidant pigment found in the bacterial membrane. Since STX protects the bacteria against neutrophils (a class of white blood cells involved in the body's immune defenses), prior attempts were made to eliminate STX with medication, but those efforts proved futile. Photolysis of STX leads to a transient increase in the permeability of the bacterial membrane, rendering the bacteria more susceptible to even mild antiseptics like hydrogen peroxide and other reactive oxygen species. Since pigmentation is a "hallmark of multiple pathogenic microbes", this technology could be extended to other microbes to tackle resistance. In addition to advantages such as ease of use and development, photobleaching appears to cause minimal or no adverse side effects.

Antisense Therapy

One of the consequences of the non-targeted use of antibiotics to treat infections has been the occurrence of C. difficile infection in the colon, a condition caused by the elimination of useful gut bacteria along with the harmful ones. To tackle this infection, Dr. Stewart's team at the University of Arizona has developed an antisense therapy that acts by silencing genes responsible for the survival of the pathogenic bacteria while sparing other useful bacteria in the gut. This strategy uses molecules with two components: an antisense oligonucleotide moiety that targets the genetic material in C. diff, and a carrier compound that transports the oligonucleotide into the bacterium. Though this treatment approach shows potential as a targeted, less toxic, nimble, and cost-effective alternative against existing and evolving pathogens, clinical trials must be undertaken to see its effects in practice.

Future perspectives

In addition to the aforementioned strategies, the scientific community is pursuing immune modulation therapy, host-directed therapy, and probiotics to deal with the current AMR crisis. The problem with developing new antibiotics is that microbes will eventually develop resistance to them. Though time will reveal which approaches are truly effective in evading antibiotic resistance, the looming threat must be dealt with prudently. Clinicians, researchers, companies, global health experts, the public, and policy makers must adopt a holistic approach that restricts antibiotics to targeted use while pursuing alternative therapies, in order to curb the resistance emergency.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 26, 2019 at 4:35 pm

Posted in Essays


Science Policy Around the Web – April 26, 2019


By: Mary Weston, Ph.D.

Source: Pixabay

World’s first malaria vaccine to go to 360,000 African children

On Tuesday, the World Health Organization (WHO) announced the launch of a large-scale pilot of the first malaria vaccine ever developed. Some 360,000 children under 2 years old will be vaccinated per year across three African countries (Malawi started vaccinating this week; Ghana and Kenya will begin in the next couple of weeks). The combined effort could immunize up to one million children by 2023. Children under five years old are at the greatest risk of life-threatening complications from malaria, and more than 250,000 children in Africa die from the disease every year.

The vaccine was developed by GlaxoSmithKline (GSK) and the PATH Malaria Vaccine Initiative (MVI) with support from the Gates Foundation. Data from clinical trials indicate it provides only partial protection, preventing around 40% of malaria cases. Thus, the vaccine is meant to complement existing malaria prevention measures (e.g., bed nets, insecticide, and rapid diagnosis and treatment of the disease).

Malaria is a parasitic infection that is transmitted via a bite from the female Anopheles mosquito. While the disease is preventable and treatable, an estimated 435,000 people die from it each year. The newly developed vaccine protects against P. falciparum, the most prevalent malaria strain found in sub-Saharan Africa.

The vaccine, known as RTS,S or Mosquirix, has taken decades to develop. It is given in four doses: three doses between five and nine months of age and the last around the child's second birthday. While this is a big step, some malaria researchers question implementing this vaccine when other, more effective vaccines are currently in clinical trials. However, even 40% efficacy will be very helpful in combating this devastating disease.

(Katie Hunt, CNN)

Drug Distributor And Former Execs Face First Criminal Charges In Opioid Crisis

For the first time, federal criminal charges have been brought against a pharmaceutical distributor for its role in perpetuating the deadly US opioid crisis. Rochester Drug Co-Operative (RDC), the sixth largest distributor in the US, was charged with conspiring to distribute controlled narcotics (fentanyl and oxycodone), defrauding the United States government, and willfully failing to file suspicious order reports. Separate individual charges were also brought against two of its former executives.

Distributors connect drug makers to pharmacies, and they are charged with monitoring drug distribution to ensure there is no abuse. However, this monitoring seems ineffectual at best. In one extreme example, an investigation by the Charleston Gazette-Mail reported that a single pharmacy in the small town of Kermit, West Virginia (population 392) received 9 million hydrocodone pills over a two-year period from out-of-state drug companies.

In the RDC case, the US attorney in Manhattan, Geoffrey S. Berman, argues that greed was the primary motivator for this abuse. Prosecutors said that RDC's executives ignored warning signs and distributed tens of millions of fentanyl products and oxycodone pills to pharmacies they knew were distributing drugs illegally, resulting in massive profits. RDC has effectively admitted to violating federal narcotics laws, agreeing to pay a $20 million fine and to be supervised by an independent monitor over the next five years.

More than 700,000 people have died from drug overdoses over the last 20 years, the majority of which have been attributed to opioids, and some estimates predict hundreds of thousands more could die in the next decade due to opioid overdoses alone. 

Addiction treatment is underfunded in the US and the White House Council of Economic Advisers estimated that the crisis cost $500 billion in economic losses in 2015 alone. Hundreds of lawsuits across the country have been filed against opioid makers, producers, and distributors in hopes of holding them accountable, preventing misbehavior in the future, and receiving money to offset the costs of the crisis on the public. 

(Richard Gonzales, NPR)


Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 26, 2019 at 9:30 am

Science Policy Around the Web – April 24, 2019


By: Patrick Wright, PhD

Image by mohamed Hassan from Pixabay 

Why Some Anti-bias Training Misses the Mark

A new study published in the Proceedings of the National Academy of Sciences (PNAS), entitled "The mixed effects of online diversity training," reports that online diversity-training programs aimed at reducing gender and racial bias among employees do not substantially affect workplace behavior, particularly among male employees.

The study cohort consisted of 3,016 volunteers (61.5% men), all salaried employees across 63 nations of a single global professional-services business. Each participant was randomly assigned to one of three sessions: gender-bias training, general-bias training, or a control condition with no bias-specific training. Training for the treatment conditions was divided into five sections, including "What are [gender] stereotypes and why do they matter?" and "How can we overcome [gender] stereotypes?" (the word "gender" was excluded from general-bias training sessions). The control condition, by contrast, contained sections such as "Why is inclusive leadership important?" and "What makes teams more inclusive?"; neither bias nor stereotyping was ever explicitly mentioned.

The authors acquired data on attitudinal shifts and behavioral changes for up to five months after the training. All volunteers were asked to complete a follow-up survey on addressing the inequalities that women and racial minorities face in the workplace. Additionally, once a week for 12 weeks after completing the training, employees were sent texts with questions such as "Have you used any inclusive leadership strategies this week? Respond Y for Yes and N for No".

Interestingly, the authors observed no positive shifts in behavior among male volunteers. Only members of groups that are commonly affected by bias (e.g., under-represented minorities) were observed to change their behavior. Lead author Edward Chang summarized this finding: "The groups that historically have had more power – white people and men – didn't move much". Women who participated in the training sought mentorship from senior colleagues and offered mentorship to junior female colleagues after the sessions.

Chester Spell, a Professor of Management at the Rutgers School of Business in Camden, New Jersey who studies behavioral and psychological health in organizations, believes that for diversity training to be truly impactful, it "has to be part of the DNA of an organization, not an appendix." Organizations must show that they are serious about fighting bias by committing to many initiatives aimed at educating employees about the presence and effects of bias. In spring of 2018, Starbucks closed 8,000 stores on a Tuesday afternoon for four hours of anti-bias training for employees, focused specifically on racial tolerance. This was in response to a prior incident in which a Philadelphia-area Starbucks café manager's call to police resulted in the arrests of two black men who were in the café waiting for a friend. However, Starbucks did not comment on future training plans.

The most effective means of implementing anti-bias training plans are still not established. This is an active area of ongoing research, especially regarding the ideal delivery method and number of sessions. Bezrukova et al., in a 2016 meta-analysis spanning 40 years of research on the impact of diversity training, observed little effect of stand-alone diversity trainings on employees' attitudes toward bias. Offering repeated or longer training sessions, complemented with other approaches such as deciding hiring criteria prior to candidate evaluation, may be the best path going forward. However, individuals in academia have a more favorable opinion of these trainings and are more receptive to them than those from the business sector. Ülger and colleagues, in a meta-analytic review across 50 studies of in-school interventions on attitudes toward outgroup members (members of different ethnic, religious, or age groups, etc.), reported that statistically significant, moderate changes in outgroup attitudes can be obtained via anti-bias programs in school. However, unlike researcher-led interventions, teacher-led and media-based interventions showed no evidence of positive outcomes. Notably, one-on-one interventions were the most impactful.

 (Virginia Gewin, Nature)

Universities Will Soon Announce Action Against Scientists Who Broke NIH Rules, Agency Head Says

During a Senate Appropriations Subcommittee hearing in early April, Dr. Francis Collins, Director of the National Institutes of Health (NIH), said that over the rest of the month many universities would announce action against faculty members who did not comply with agency rules on protecting the confidentiality of peer review, handling intellectual property, and disclosing foreign ties. Dr. Collins told Senator Roy Blunt (R-MO), chair of the subcommittee, that there are ongoing investigations at more than 55 U.S. institutions and that some scientists have been found to have failed to disclose foreign funding for work that was also being supported by NIH.

The push to systematically uncover potential violations of these intellectual property and confidentiality rules began in August 2018, when Dr. Collins wrote to the 10,000 institutions receiving NIH funding, requesting that they look for any instances of concerning behavior. Dr. Collins spoke of faculty researchers already being dismissed: "There are increasing instances where faculty have been fired, have been asked to leave the institution, many of them returning back to their previous foreign base." For example, the MD Anderson Cancer Center, part of the University of Texas system, announced last week that it has fired three senior researchers who committed potentially "serious" violations of rules involving confidentiality of peer review and disclosure of foreign ties after they were identified by NIH.

However, both Dr. Collins and Senator Blunt emphasized that this is not a pervasive problem; most foreign scientists working in the United States and funded by the NIH follow funding and disclosure rules. "We need to be careful that we don't step into something that almost seems a little like racial profiling," Dr. Collins stated at the hearing.

 (Jocelyn Kaiser, Science)



Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 25, 2019 at 10:06 am

Science Policy Around the Web – April 19, 2019


By: Neetu Gulati, PhD

Image by Raman Oza from Pixabay 

Scientists Restore Some Function in the Brains of Dead Pigs 

Scientists have partially revived the brains of dead pigs hours after the animals were killed, contradicting the dogma surrounding death. Cut off from oxygen, the brain of a mammal is supposed to die after about 15 minutes, and the process was thought to be widespread and irreversible: once the cells in the brain die, they cannot be brought back. A study published in Nature has challenged this dogma. While none of the tested brains regained signs of consciousness, the Yale researchers were able to demonstrate that cellular function was either preserved or restored.

The study used 32 brains from pigs that had been slaughtered for food. After waiting four hours, well past the 15 minutes of oxygen deprivation thought to "kill" the brain, the researchers hooked the brains up to a system that pumped in a cocktail of specially formulated nutrients and chemicals called BrainEx for six hours. Compared to brains not given BrainEx, the treated brains had better-preserved structure, less cell death, and some restored cellular functions. Nevertheless, Nenad Sestan, the lead researcher on the project, was quick to point out that while the brains had some restored activity, "this is not a living brain."

In fact, the goal of the study was not to restore consciousness, which would raise many ethical concerns. The scientists monitored electrical activity in the brains and intended to stop any signs of consciousness that might have been detected. Stephen Latham, a bioethicist who worked with the team, explained that they would need more ethical guidance before attempting any studies that altered consciousness in the pigs' brains. To avoid this, the BrainEx cocktail also included a drug known to dampen neuronal activity.

The implications of this study are vast. The breakthrough will hopefully create a better link between basic neuroscience and clinical research, and even with the ethical considerations, it is likely that people will eventually want to apply this technology to human brains. It may lead to interesting policy discussions because, while there are many restrictions on what can be done with living research animals or human subjects, there are far fewer restrictions on the dead. It may also affect organ transplantation efforts involving brain-dead individuals, as they may eventually become candidates for brain revival. A lot still needs to be investigated in the meantime, but the implications are vast and mind-blowing.

(Nell Greenfieldboyce, NPR)

Darkness Visible, Finally: Astronomers Capture First Ever Image of a Black Hole

Last week it was announced that scientists had captured an image of the shadow of a black hole for the first time in history. The image is the result of an international collaboration of 200 members of the Event Horizon Telescope team. The results were simultaneously announced at news conferences in six locations around the world, including at the National Science Foundation.

The data were collected over a 10-day period using eight telescopes around the world, focused on Messier 87 (M87), a giant galaxy within the constellation Virgo. It is within M87 that a black hole billions of times more massive than the sun was visualized. After the data were collected, it took two years of computer analysis to produce the blurry image of a lopsided ring of light around a dark circle.

Black holes like the one found in M87 are supermassive objects so dense that their gravitational pull lets no matter escape. According to Einstein's theory of general relativity, the warping of space-time around a black hole prevents even light from escaping. The first direct proof of the existence of black holes came in 2016, when LIGO detected the collision of a pair of black holes. Now, merely three years later, the world has photographic evidence, and features of the black hole can be determined, including its mass: 6.5 billion solar masses, heavier than most previous determinations.
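
For a sense of scale, the size of a black hole's event horizon, the boundary inside which not even light escapes, follows from the standard Schwarzschild formula r_s = 2GM/c^2. The quick calculation below uses the reported mass of 6.5 billion solar masses; the code and constants are our own illustration, not part of the original reporting.

```python
# Schwarzschild radius r_s = 2 G M / c^2: the event-horizon radius below
# which nothing, not even light, can escape. Physical constants are the
# standard textbook values; the mass is the reported 6.5e9 solar masses.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the sun, kg

M = 6.5e9 * M_sun              # mass of the M87 black hole, kg
r_s = 2 * G * M / c**2         # event-horizon radius, m

AU = 1.496e11                  # one astronomical unit (Earth-sun distance), m
print(f"r_s = {r_s:.2e} m = {r_s / AU:.0f} AU")   # ~1.9e13 m, about 128 AU
```

The result, roughly 1.9 × 10^13 meters or about 128 astronomical units, is about three times the radius of Pluto's orbit, yet it appears vanishingly small from Earth, hence the need for a planet-spanning telescope network to resolve it.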

Moving forward, the Event Horizon Telescope partnership plans to continue observations of M87 and collect data of other regions of space. The telescope network also continues to expand: earlier this year another telescope was added to the collaboration, with more antennas also expected to join soon. The collaboration will continue to observe black holes and monitor their behavior to see how things change.

(Dennis Overbye, New York Times)


Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 21, 2019 at 12:10 pm

The need for regulation of artificial intelligence


By: Jayasai Rajagopal, Ph.D.


Source: Wikimedia

The development and improvement of artificial intelligence (AI) portends change and revolution in many fields. A quick glance at the Wikipedia article on applications of artificial intelligence highlights the breadth of fields that have already been affected by these developments: healthcare, marketing, finance, music, and many others. As these algorithms increase in complexity and grow in their ability to solve more diverse problems, the need to define rules by which AI is developed becomes more and more important.

Before explaining the potential pitfalls of AI, a brief explanation of the technology is required. Attempting to define artificial intelligence begs the question of what is meant by intelligence in the first place. Poole, Mackworth and Goebel clarify that for an agent to be considered intelligent, it must adapt to its surrounding circumstances, learn from changes in those circumstances, and apply that experience in pursuit of a particular goal. A machine that is able to adapt to changing parameters, adjust its programming, and continue to pursue a specified directive is an example of artificial intelligence. While such simulacra are found throughout science fiction, dating back to Mary Shelley's Frankenstein, they are a more recent phenomenon in the real world.
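
That definition can be made concrete with a toy sketch, offered purely as an illustration of the adapt-learn-pursue pattern rather than anything drawn from the sources cited here; the two-action environment and its payoffs are invented. The loop acts toward a fixed goal (maximizing reward), observes outcomes, and adapts its future choices from experience:

```python
import random

# The agent's learned estimate of each action's average payoff, and how
# often each action has been tried.
values = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

def reward(action):
    # Invented environment: action "B" pays 0.3 more on average than "A".
    return random.random() + (0.3 if action == "B" else 0.0)

for step in range(1000):
    # Epsilon-greedy choice: usually exploit the best-known action,
    # occasionally explore the alternative (adapting to circumstances).
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    # Learn from experience: nudge the estimate toward the observed payoff.
    values[action] += (r - values[action]) / counts[action]

print(values)  # the estimate for "B" should settle about 0.3 above "A"
```

Even this trivial loop exhibits the three ingredients of the definition: goal-directed action, feedback from changing circumstances, and learning that alters subsequent behavior.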

Development of AI technology has taken off within the last few decades as computer processing power has increased. Computers began successfully competing against humans in chess as early as 1997, with Deep Blue's victory over Garry Kasparov. In recent years, computers have started to earn victories in even more complex games such as Go and even video games such as Dota 2. Artificial intelligence programs have become commonplace at many companies, which use them to monitor their products and improve the performance of their services. A report in 2017 found that one in five companies employed some form of AI in their workings. Such applications are only going to become more common in the future.

In the healthcare field, the prominence of AI is readily visible. A report by BGV predicted a total of $6.6 billion invested in healthcare AI by 2021, and Accenture found that this could lead to savings of up to $150 billion by 2026. With the recent push toward personalized and precision medicine, AI can greatly improve treatment and the quality of care.

However, there are pitfalls associated with AI. At the forefront, AI poses a potential risk of abuse by bad actors. Companies and websites are frequently reported in the news for being hacked and losing customers' personal information. The 2017 WannaCry attack crippled the UK's healthcare system, as regular operations at many institutions were halted by compromised data infrastructure. While cyberdefenses will evolve with the use of AI, there is a legitimate fear that bad actors could just as easily utilize AI in their attacks. Regulation of the use and development of AI can limit the number of such actors with access to those technologies.

Another concern with AI is the privacy question associated with the amount of data required. Neural networks, which seek to imitate the neurological processing of the human brain, require large amounts of data to reliably generate their conclusions. Such large datasets need to be curated carefully to ensure that identifying information that could compromise the privacy of citizens is not easily divulged. Additionally, data mining and other AI algorithms could uncover information that individuals may not want revealed. In 2012, a coupon suggestion algorithm used by Target was able to discern the probability that some of its shoppers were pregnant. This proved problematic for one teenager, whose father wanted to know why Target was sending his daughter coupons for maternity clothes and baby cribs. As with the cyberwarfare concern, regulation is a critical component in protecting the privacy of citizens.

Finally, in some fields, including healthcare, there is an ever-present concern that artificial intelligence may replace some operations entirely. For example, in radiology there is a fear that improvements in image analysis and computer-aided diagnosis driven by neural networks could replace clinicians. For the healthcare field in particular, this raises several important ethical questions. What if the diagnosis of an algorithm disagrees with a clinician's? Since the knowledge an algorithm has is limited by the information it has been exposed to, how will it react when a unique case is presented? From this perspective, regulation of AI is important not only to address practical concerns, but also to pre-emptively answer ethical questions.

While regulation as strict as Asimov's Three Laws may not be required, a more uniform set of rules governing AI is needed. At the international level, there is much debate among the members of the United Nations as to how to address the issue of cybersecurity. Other organizations, such as the European Union, have made more progress: a document recently released by the EU highlights ethical guidelines which may serve as the foundation for future regulations. At the domestic level, there has been a push from scientists and leaders in the field toward harnessing the development of artificial intelligence for the good of all. In particular, significant headway has been made in the regulation of self-driving cars. Laws passed in California restrict how the cars can be tested, and by 2014 four states already had legislation applying to these kinds of cars.

Moreover, the FDA recently released a statement describing its approach to the regulation of artificial intelligence in the context of medical devices. At the time of this writing, a discussion paper describing the FDA's proposed approach is open for comment. The agency notes that conventional methods of acquiring pre-market clearance for devices may not apply to artificial intelligence; the newly proposed framework adapts existing practices to the context of software that improves over time.

Regulation must also be handled with care. Over-limiting the use of, and research into, artificial intelligence could stifle development. Laws must be made with knowledge of the potential benefits that new technological advancements could bring. As noted by Gurkaynak, Yilmaz, and Haksever, lawmakers must strike a balance between preserving the interests of humanity and the benefits of technological improvement. Indeed, artificial intelligence poses many challenges for legal scholars.

In the end, artificial intelligence is an exciting technological development that can change the way we go about our daily business. With proper regulation, legislation, and research focus, this technology can be harnessed in a way that benefits the human experience while preserving development and the security of persons.

Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 18, 2019 at 2:25 pm

Science Policy Around the Web – April 16, 2019


By: Mary Weston, PhD

Source: Wikimedia

Astronaut twins study spots subtle genetic changes caused by space travel

In 2015, NASA began its Twins Study, evaluating the biological effects of one year of spaceflight on an astronaut by comparing him to his earthbound identical twin. One year after his return to Earth, the majority of the physiological changes observed in space had reverted to the astronaut's original state, with only subtle genetic changes remaining.

Spaceflight exposes the body to ionizing radiation and near-zero gravity, and the consequences of long-term exposure to these conditions are not known. On this mission, Scott Kelly spent 340 days in space from 2015 to 2016 (he has a lifetime total of 520 days in space). His brother Mark, a retired astronaut who had previously spent 54 days in space over four space-shuttle missions, remained on Earth and acted as a near-identical biological control. Because the study involved only two people, not all findings may be applicable to other astronauts, but NASA hopes to use the information to direct future astronaut health studies.

Teams of researchers gathered a wide array of genomic, molecular, physiological, and other data on the men before, during, and after the mission. They reported that Scott Kelly did display signs of stress from space travel, with changes seen in most areas measured. 

Researchers are now finding that most of the changes Scott Kelly experienced during spaceflight reverted to their original state within six months of his return to Earth. NASA argues that "the Twins Study demonstrated the resilience and robustness of how a human body can adapt to a multitude of changes induced by the spaceflight environment".

One genetic change that did persist six months after Scott's return involved his chromosomes. Parts of them inverted (flipped), a form of DNA damage possibly due to the large amounts of space radiation. Further, researchers had hypothesized that spaceflight would shorten telomeres, the important caps at the ends of chromosomes, since they shorten with age and spaceflight was expected to stress the body in ways similar to aging. Instead, a majority of Scott Kelly's telomeres lengthened while he was in space, and only a few shortened. Those that lengthened returned to their normal state within about 48 hours on Earth, but the shortened ones remained.

Given the space community’s interest in increasingly ambitious space missions and plans to explore Mars, studies exploring the long-term health impacts of spaceflight will be extremely important for the future.

(Alexandra Witze, Nature)


Abnormal Levels of a Protein Linked to C.T.E. Found in N.F.L. Players’ Brains, Study Shows

Last week, the New England Journal of Medicine published a study that used experimental brain scans to compare the levels and distribution of tau, a protein linked to chronic traumatic encephalopathy (CTE), in retired NFL players and male controls who had never played football. They found that the NFL players had elevated levels of tau in areas where the protein had previously been detected postmortem. 

CTE is associated with repetitive hits to the head, like those encountered during contact/collision sports, and currently pathologists can only diagnose it posthumously. This new study is the first to compare average tau levels and overall patterns in a group of living former football players (26 men) against a control group (31 men). The project, led by Dr. Robert Stern of Boston University, used positron emission tomography (PET) to image the brain after administration of a radiolabeled tracer that specifically binds tau.

Both the study's authors and outside experts emphasize that a CTE diagnostic test is still far from ready and would likely also include markers from blood and spinal fluid. However, this study represents a preliminary first step toward developing a clinical test to detect CTE in living players, which may ultimately assist in identifying early signs of disease and those at risk of developing CTE.

The relationship between CTE symptoms and the role of tau, which occurs naturally in the brain, is not clear. The study found no correlation between the amount of abnormal tau and the severity of cognitive and mood problems in the players. However, these results are preliminary and the sample size was small. Evaluation of a larger sample of football players is needed to further explore the role of tau and to replicate the elevated levels observed in this paper.

(Ken Belson and Benedict Carey, New York Times)


Have an interesting science policy link? Share it in the comments!

Written by sciencepolicyforall

April 17, 2019 at 9:34 am