Science Policy For All

Because science policy affects everyone.


Science Policy Around the Web – April 14, 2017


By: Leopold Kong, PhD

Fatty foods: By Lucasmartin2 (Own work) [CC BY-SA 4.0], via Wikimedia Commons

Health Policy

Banning Trans Fats in New York Prevented Thousands of Heart Attacks

In an effort to lower the incidence of heart disease, the leading cause of death in the United States, the FDA will prohibit food manufacturers from using trans fats starting next summer. The FDA’s decision was based on decades of research linking trans fat consumption with an increased risk of heart disease. A study published this Wednesday in JAMA Cardiology provided further support for the ban. Using data from the New York State Department of Public Health, collected from 11 counties where trans fat restrictions had recently been implemented, the researchers showed statistically significant declines in heart attacks (7.8%) and strokes (3.6%) following the restrictions. “The most important message from these data is that they confirm what we predicted — benefit in the reduction of heart attacks and strokes,” said the lead author, Dr. Eric J. Brandt, a fellow in cardiovascular medicine at Yale. “This is a well-planned and well-executed public policy.” With the rising cost of health care in the United States, the FDA’s long-awaited trans fat ban is urgently needed to lighten the public health burden. (Leah Samuel, STATNews)

Vaccine Research

The Human Vaccines Project, Vanderbilt and Illumina Join Forces to Decode the Human Immunome

Rapidly evolving viruses such as HIV and hepatitis C have been difficult targets for traditional vaccine development, in which inactivated viruses or viral proteins are used as vaccine components. Despite the success of small molecule therapeutics against HIV and hepatitis C, an effective vaccine remains the most cost-effective solution to curb the global pandemics caused by these viruses. Scientists now seek to optimize vaccine candidates based on a deeper understanding of host-pathogen interactions, using multidisciplinary approaches ranging from protein engineering and evolutionary biology to immunology and genetics. To facilitate these sophisticated efforts, the Human Vaccines Project, an international public-private collaboration, was established. A major initiative of the project, the Human Immunome Program, is led by Vanderbilt University Medical Center. Now, Illumina has joined the collaboration to help decipher the genetic features of the immune system, or the “immunome,” using cutting-edge sequencing technology. DNA sequences from immune cells during infection may capture how the immune system adapts to viruses, providing guidelines for vaccine design. “Successfully defining the human immunome will provide the foundational knowledge to usher in a new era of vaccine, diagnostic, and therapeutic development,” says Gary Schroth, vice president for product development at Illumina. Greater understanding of the immunome may also lead to more effective cancer vaccines. (Human Vaccines Project)

 

Have an interesting science policy link?  Share it in the comments!


Science Policy Around the Web – February 21, 2017


By: Rachel Smallwood, PhD

Obesity

Should We Treat Obesity Like a Contagious Disease?

Researchers are modeling obesity from a public health perspective as a contagious disease. There are many factors associated with obesity, including genetics, low levels of physical activity, and high caloric intake. An earlier study examined the effects of different social factors on an individual’s risk of being obese; it found that people with obese friends and family were at an increased risk for obesity, and this trend was influenced by how close the relationships were.

In this model of obesity prevalence, the researchers included a factor representing obesity as a “social contagion”, reflecting those previous findings and capturing a potential increase in risk and prevalence due to transmission from one person to another. This mechanism is assumed to reflect people adopting the behaviors of those close to them, notably activity levels and the type and quantity of food consumed. The model predicts obesity rates in a population using terms for the genetic contribution to obesity, the mother’s non-genetic contribution to her offspring, and the current prevalence of obesity. Essentially, the more obese individuals there are in a society, the more likely it is for someone to know and interact with an obese person.

The models indicate that obesity prevalence plateaus around 35-40% without an intervention. The model is still fairly primitive, but the researchers hope that in the future it could provide insight into the effects of potential interventions. For example, is it better to target an intervention at individuals who are already obese, or should the intervention reach more broadly and target the population as a whole? Once the models capture the known contributing factors to obesity in sufficient detail, they could become a powerful tool for preventing and addressing the epidemic. (Kelly Servick, Science Magazine)
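To make the dynamics described above concrete, here is a minimal, hypothetical sketch of a prevalence model with a social-transmission term. It is not the published model, and the parameter values are invented for illustration; they are simply chosen so the simulated prevalence levels off near the 35-40% plateau mentioned above.

```python
# Minimal, hypothetical sketch of an obesity "social contagion" model.
# This is NOT the published model; parameters are illustrative only.

def simulate_prevalence(years=200, p0=0.15, dt=0.1,
                        beta=0.06,      # social transmission: risk scales with prevalence p
                        genetic=0.012,  # incidence independent of prevalence (e.g., genetic)
                        recovery=0.06): # rate at which obese individuals become non-obese
    """Return the obese fraction p of the population after `years`."""
    p = p0
    for _ in range(int(years / dt)):
        new_cases = (beta * p + genetic) * (1 - p)  # non-obese becoming obese
        remissions = recovery * p                   # obese becoming non-obese
        p += dt * (new_cases - remissions)
    return p

print(f"Simulated long-run prevalence: {simulate_prevalence():.1%}")  # ~36%, near the reported plateau
```

In a toy model like this, an intervention can be represented by raising the recovery rate or lowering the transmission term, which is the kind of comparison the researchers hope more realistic versions will eventually support.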

Autism

Brain Scans Spot Early Signs of Autism in High-Risk Babies

A study recently published in Nature showed that alterations in brain development in children who go on to be diagnosed with autism precede behavioral symptoms. High-risk infants’ brains were scanned with MRI at 6, 12, and 24 months. It was determined that the infants who were subsequently diagnosed with autism had a faster rate of brain volume growth between 12 and 24 months. Additionally, between 6 and 12 months, these infants had a faster rate of growth in the surface area of folds on the brain, called the cortical surface.

Building on these findings, the research team used a machine-learning approach, a deep-learning neural network, to build a model that predicts whether an infant will be diagnosed with autism based on the 6- and 12-month MRIs. When tested in a larger set of infants, the model correctly identified 30 of the 37 infants who went on to be diagnosed (true positives) and incorrectly flagged only 4 of the 142 infants who were not later diagnosed (false positives). These results are much more robust than behavior-based predictions from the same age range.
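For context, the standard screening metrics implied by those counts can be computed directly. The short snippet below simply restates the numbers reported above; the study itself may quote slightly different figures depending on how missing data were handled.

```python
# Performance metrics implied by the counts reported above
# (30 of 37 diagnosed infants identified; 4 false positives among 142 non-diagnosed infants).

true_pos, false_neg = 30, 37 - 30        # diagnosed infants: caught vs. missed
false_pos, true_neg = 4, 142 - 4         # non-diagnosed infants: flagged vs. cleared

sensitivity = true_pos / (true_pos + false_neg)   # ~81% of future diagnoses detected
specificity = true_neg / (true_neg + false_pos)   # ~97% of non-cases correctly cleared
ppv = true_pos / (true_pos + false_pos)           # ~88% of positive calls were correct

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, PPV={ppv:.1%}")
```

In other words, roughly four out of five future diagnoses were detected while very few unaffected infants were flagged.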

More work needs to be done to replicate the results in a larger sample. Additionally, all of the participants were high-risk infants, meaning they had a sibling diagnosed with autism, so the results are not necessarily generalizable to the rest of the population. Further studies are needed in the general population to determine whether the same patterns are observable, but that would require an even larger sample because of the lower baseline risk. However, early detection of symptoms and prediction of diagnosis are potentially valuable tools, especially considering that another recent publication showed that early intervention in children with autism affects the severity of symptoms years down the road. (Ewen Callaway, Nature News)

Science Funding

Ebola Funding Surge Hides Falling Investment in Other Neglected Diseases

Funding totals from 2015 reveal a continuing decrease in funding for neglected diseases, excluding Ebola and other viral hemorrhagic fevers. Neglected diseases are diseases that primarily affect developing countries, providing little incentive for private research and development by commercial entities; besides Ebola, they include malaria, tuberculosis, and HIV/AIDS. Given the recent surge of funding for Ebola research, the analysis firm Policy Cures Research decided to separate Ebola from the other neglected diseases in its analysis to observe funding patterns independent of the epidemic that dominated the news and international concerns. Funding was tracked from private, public, and philanthropic sources.

The funding for Ebola research has primarily gone to the development of a vaccine, and over a third of the funds were provided by industry. For the other diseases, the decline in overall funding mostly reflects a decline in funding from public entities, primarily the governments of large, developed countries. Those countries accounted for 97% of that public funding for neglected disease research in 2015, so any significant change in that funding category affects the overall totals. However, there was also a slight decline in philanthropic funding. When Ebola is included with the others, funding of neglected diseases was actually at its highest level in the past ten years. It is not known whether money was funneled from the other diseases to Ebola research, or whether this decline is indicative of less research spending in general. (Erin Ross, Nature News)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

February 21, 2017 at 10:03 am

Science Policy Around the Web – November 22, 2016


By: Rachel Smallwood, PhD

Photo source: pixabay

Federal Research Funding

US R&D Spending at All-Time High, Federal Share Reaches Record Low

Recently released data from the National Science Foundation (NSF) show a continuing increase in US scientific research funding over the past several years. Estimates of total funding for 2015 put the value at an all-time high for research and development (R&D) funding by any country in a single year. In 2009, President Obama set a goal of devoting 3% of the USA’s gross domestic product (GDP) to research, and progress toward that goal has been slow; in 2015, 2.78% of GDP went to research. Businesses accounted for the largest portion of overall scientific funding, contributing 69% of the funds. The second largest contributor was the federal government; however, its share of the total was the lowest since the NSF began tracking funding in 1953, and the actual dollar amount it contributes has been declining since 2011. Therefore, although the overall percentage of GDP going to research is increasing, that increase is driven by businesses, whereas the GDP percentage contributed by the federal government has dropped to roughly 0.6%.

Taking a closer look at types of research, the federal government is the largest funding source for basic science research, covering 45% of the total. However, businesses provide the majority of funding for applied research (52% in 2014) and experimental development (82% in 2014). This imbalance in funding types, combined with the decreases in federal research spending, is concerning for the basic science field. There is more competition for less money, and this concern is compounded by uncertainty about President-elect Trump’s position on and plans for scientific funding. Aside from a couple of issues, primarily concerning climate change and the environment, he has said very little about science and research. Many scientists, institutions, and concerned citizens will be watching closely to see how science policy develops under Trump’s administration and its effects on federal spending and beyond. (Mike Henry, American Institute of Physics)

Biomedical Research

‘Minibrains’ Could Help Drug Discovery for Zika and for Alzheimer’s

A group of researchers at Johns Hopkins University (JHU) is working on a promising tool for evaluating disease and drug effects in humans without actually testing on humans. ‘Minibrains’ are clusters of human neural cells grown from skin cells that have been reprogrammed to an earlier stage of development and then forced to differentiate into neurons. They mimic the human brain in terms of cell types and connections, but will never be anywhere near as large as a human brain and can never learn or become conscious.

A presentation earlier this year at the American Association for the Advancement of Science conference showcased the potential utility for minibrains. A large majority of drugs that are tested in animals fail when introduced in humans. Minibrains provide a way to test these drugs in human tissue at a much earlier stage – saving time, money, and animal testing – without risking harm to humans. Minibrains to test for biocompatibility can be made from skin cells of healthy humans, but skin cells from people with diseases or genetic traits can also be used to study disease effects.

A presentation at the Society for Neuroscience conference this month demonstrated one such disease – Zika. The minibrains’ growth is similar to fetal brain growth during early pregnancy. Using the minibrains, Dr. Hongjun Song’s team at JHU was able to see how the Zika virus affected the cells; the affected minibrains were much smaller than normal, a result that appears analogous to the microcephaly observed in infants whose mothers were infected with Zika during the first trimester.

Other presentations at the meeting showcased work from several research groups that are already using minibrains to study diseases and disorders including brain cancer, Down syndrome, and Rett syndrome, and plans are underway to apply them to autism, schizophrenia, and Alzheimer’s disease. Though there may be an acceptance curve with the general public, minibrains potentially offer an avenue of testing that better represents actual human cell behavior and response, is safer and more affordable, and reduces the need for animal testing. (Jon Hamilton, NPR)

Health Policy

A Twist on ‘Involuntary Commitment’: Some Heroin Users Request It

The opioid addiction epidemic has become a significant healthcare crisis in the United States. Just last week the US Surgeon General announced plans to target addiction and substance abuse. He also stated the desire for a change in perception of addiction – it is a medical condition rather than a moral or character flaw. Earlier this year, the Centers for Disease Control published guidelines that address opioid prescribing practices for chronic pain, strongly urging physicians to exhaust non-pharmacologic options before utilizing opioids. In response to the rising concern over prescription opioid abuse, steps have been taken to reduce prescriptions and access. This has resulted in many turning to heroin – which is usually a cheaper alternative anyway – to get their opioid fix.

One of the first steps in treatment and recovery for addiction and dependence is detoxing. However, opioids are highly addictive, and many people struggle with the temptation to relapse. Additionally, many of the programs designed to help with the initial detox have long wait lists, are expensive, and may not be covered by insurance, further deterring those with addiction and dependence from getting the help they need. These factors have caused many to turn to their states and ask to be civilly committed to a program on the grounds that they are a danger to themselves or others because of their substance abuse. This is currently an option in 38 states. These programs can be held either in privately run institutions or in state prisons. However, the practice is controversial because if a person’s insurance does not cover the stay, it falls to taxpayers to foot the bill. While this is unpopular with some, advocates say the civil commitment laws are important options when there may be no other immediate way for an individual to get help. (Karen Brown, NPR)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 22, 2016 at 9:00 am

Science Policy Around the Web – November 4, 2016


By: Courtney Kurtyka, PhD

Source: Flickr, by Wellington College, under Creative Commons

Science Education

Unexpected results regarding U.S. students’ science education released

The National Assessment of Educational Progress (NAEP) is a nationwide exam and survey used in the United States to ascertain student knowledge and education in key areas. Recently, the 2015 science results from fourth, eighth, and twelfth graders were released and showed some surprising outcomes. Of seven hands-on activities that students were asked whether they had completed as part of their curriculum, only one (simple machines) showed a positive correlation between activity participation and scores on the exam. Some activities (such as using a microscope or working with chemicals) showed no correlation with exam scores, while students who engaged in activities such as handling rocks and minerals actually performed worse than students who did not. Furthermore, fewer students engage in scientific activities as part of their curriculum than one might expect: 58% said that they never used simple machines in class, and 62% said they never or rarely work with “living things”.

An anonymous expert on the assessment suggested that one potential explanation for these unexpected results is that the assessment asks whether students completed any of these activities “this year”. For the twelfth-grade results, students who use rocks and minerals in class tend to be in lower-level science courses and are less likely to perform as well on the exam as students in higher-level courses, which would not include that activity. However, this does not account for the low level of reporting of scientific activities overall.

Another concerning aspect of the exam relates to the reporting of the results. The National Center for Education Statistics (NCES), which manages the NAEP, operates a website that is both difficult to use and incomplete. In fact, the drop-down menu of survey results lists only the activities that have positive correlations with test scores. NCES has said that it shows the results it thinks are of greatest interest to the public.

While some cite the positive results as a reflection of the success of active learning techniques, others note that 40% of the twelfth graders who took the NAEP lacked a “basic” knowledge of science. Additionally, these results are interesting to many because the twelfth graders are the first students to have spent their entire education under No Child Left Behind, which mandated annual assessment of reading and math for third through eighth graders. Since many have argued that this law leaves less room for teaching topics that are not tested (such as science), examining students’ scientific performance under these guidelines is important. (Jeffrey Mervis, Science Magazine)

Health Disparities

Sexual and gender minorities are officially recognized as a minority health population

The National Institute on Minority Health and Health Disparities (NIMHD), one of the institutes and centers within the National Institutes of Health, recently officially recognized sexual and gender minorities (SGM) as a distinct minority health population. The SGM population is very diverse, including lesbian, gay, bisexual, and transgender communities, as well as people whose sexual orientation or gender identity differs from traditional, cultural, or other norms.

Multiple health disparities (meaning that the likelihood of disease and of death from particular diseases and disorders in that group differs from that of the general population) have been identified in the SGM population. Some of these issues include a lower likelihood of women who have sex with women getting Pap smears and mammograms, and higher rates of depression, panic attacks, and psychological distress in gay and bisexual men.

Previously, the NIH requested a report on SGM health that was published in 2011, and later created the Sexual and Gender Minority Research Office (SGMRO) following the results of the report. Now, this official designation will allow researchers focused on SGM health to be able to apply for health disparity funding from the NIH, and Karen Parker (the director of the SGMRO at the NIH) said that she hopes that it will lead to increased interest in applications to support health research related to this population. (Nicole Wetsman, STAT)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

November 4, 2016 at 9:00 am

Science Policy Around the Web – October 21, 2016


By: Leopold Kong, PhD

Source: Flickr, under Creative Commons

2016 Elections

The polling crisis: How to tell what people really think

The conflicting polling results for the US presidential election have been a source of no small confusion for American voters. Skepticism over polling is further justified by recent failures, as in the 2013 provincial election in British Columbia, when the Liberal Party won against expectations, and the Brexit referendum. Two major challenges make polling less accurate, and changes are underway to address them.

The first major challenge is obtaining public opinion. In the past, pollsters could simply call people at home, but this is increasingly difficult with the rise of cell phone use. Currently, only 50% of US households have landlines, compared to 80% in 2008. Federal regulations require that mobile phones be dialed manually, and people often don’t answer calls from an unfamiliar number. Those who do answer might represent a biased sample. Despite these limitations, calling cell phones is more accurate than online polls, which are less regulated and could easily be manipulated. Using texting instead of direct calls could also increase response rates.

The second major challenge is predicting who will vote, which is particularly difficult in the US, where voter turnout is only about 45-50%. To predict this, each polling organization uses a proprietary mix of factors such as voting history and political engagement. “Likely voter modeling is notoriously the secret-sauce aspect of polling,” says Courtney Kennedy, director of survey research at the Pew Research Center in DC. Furthermore, these models may lead pollsters to unconsciously “herd” their results toward predicted expectations. Improvements are underway, including modeling each respondent’s probability of voting rather than making a discrete yes/no classification, and greater transparency in methodology.
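The difference between the two approaches can be illustrated with a toy calculation. The respondents and turnout scores below are invented for illustration, since real likely-voter models are proprietary.

```python
# Hypothetical sketch contrasting a discrete likely-voter screen with a
# probabilistic turnout model. All respondent data are invented.

respondents = [
    # (supports_candidate_A, estimated_turnout_probability)
    (True, 0.9), (False, 0.8), (True, 0.4), (False, 0.3), (True, 0.6),
]

# Discrete screen: count only respondents above a turnout cutoff.
cutoff = 0.5
likely = [(a, p) for a, p in respondents if p >= cutoff]
discrete_share = sum(a for a, _ in likely) / len(likely)

# Probabilistic model: weight every respondent by their turnout probability.
weighted_support = sum(p for a, p in respondents if a)
weighted_total = sum(p for _, p in respondents)
prob_share = weighted_support / weighted_total

print(f"Candidate A support: discrete screen {discrete_share:.0%}, "
      f"probability-weighted {prob_share:.0%}")
```

With the discrete screen, a respondent just below the cutoff contributes nothing; with probability weighting, every respondent counts in proportion to how likely they are to vote, which is one reason the two methods can give different estimates from the same interviews.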

With the changing face of demographics and technologies, polling science is evolving to keep pace. (Ramin Skibba, Nature)

Health Policy

Two HPV shots instead of three

Human papillomavirus (HPV) is responsible for about 5% of all cancers in the world, including 70% of throat, neck, and oral cancers and 90% of all anal cancers. An effective vaccine was originally approved in 2006 with a three-dose regimen to confer protection. Since then, a review of clinical data, including a trial in Costa Rica, has shown protective efficacy with only two doses. The Advisory Committee on Immunization Practices at the Centers for Disease Control and Prevention (CDC) has now recommended two doses of the vaccine for preteen boys and girls.

“The pediatricians and other people I talked to said the new recommendation is a game changer with that schedule,” said Kevin Ault, MD, professor of OBGYN at the University of Kansas Hospital. “It’ll make it easier for the doctors, easier for the parents and easier for the kids.”

This recommendation is very timely and may boost vaccination rates, which have risen very slowly so far: the share of teen girls getting the vaccine increased only from 60% in 2014 to 62.8% in 2015. Doctors have been timid about promoting the shots with parents, who may not want to discuss their children having sex, and a lighter vaccination schedule may help. Furthermore, a two-dose schedule significantly reduces the cost of implementing the vaccine in low- and middle-income countries, and thus may greatly aid in curbing the global cancer burden. (Associated Press, STAT)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

October 21, 2016 at 9:00 am


Science Policy Around the Web – October 4, 2016


By: Cheryl Smith, PhD

Source: Flickr, under Creative Commons

Health Policy

FDA approves first drug for Duchenne muscular dystrophy

The Food and Drug Administration (FDA) approved a drug, Exondys 51, to treat Duchenne muscular dystrophy, a rare, debilitating disease that destroys muscle, confines boys to wheelchairs, and eventually leads to death. The FDA made the decision over the objections of its own medical staffers, who questioned the effectiveness of the drug. A key concern was whether the drug can produce enough of a protein called dystrophin to reverse muscle damage and, as a consequence, improve overall mobility and strength.

However, patients and their families lobbied hard for drug approval. Laura McLinn, an Indiana mother whose 7-year-old son has Duchenne muscular dystrophy, was in tears Monday when she heard the news of the drug’s approval. “I’m really overwhelmed,” McLinn said. “We’ve been waiting a long time to hear this.”

In reaching its decision, the agency essentially overruled those staff reviewers, whose doubts earlier this year centered on a small clinical trial. The wrangling raised still larger questions about standards for approving a drug, but some FDA officials also acknowledged that unmet medical needs for patients with some rare diseases warranted endorsement under a program known as accelerated approval. (Ed Silverman, Scientific American)

Biotechnology and Forensics

DNA breakthrough finally gives ‘a face to this crime.’ But can it solve a woman’s 1992 murder?

Lisa Ziegert was murdered in 1992 and her killer was never found; however, a sliver of her attacker’s DNA was recovered. That DNA lead went cold, like all the other evidence in the case. Now, prosecutors say that the DNA left by Ms. Ziegert’s attacker has given them a new lead in the case, as well as a face. The Reston-based company Parabon Nanolabs has developed a technology that uses DNA to make predictions about a suspect’s ancestry, eye color, hair color, skin color, freckling, and face shape, and then uses those predicted characteristics to reconstruct a likeness of the suspect’s face.

In the past, DNA has typically been used as a biometric identifier capable of identifying individuals with great certainty. Now, this technology can literally put a face to a crime.

Ms. Ziegert’s killer, according to Parabon, was likely a man of European descent with hazel eyes and brown or black hair. “For the first time in twenty-four years, we have a face to this crime,” Hampden District Attorney Anthony Gulluni said in a statement released Wednesday. “The technology we have put to use is at the leading edge of the industry. No expense, effort, or means will be spared to bring the person(s) to justice who killed Lisa. We will never forget her.” (Cleve R. Wootson Jr., The Washington Post)

Biomedical Research

Yoshinori Ohsumi of Japan wins Nobel prize for study of ‘self-eating’ cells

Dr. Yoshinori Ohsumi, a Japanese cell biologist, was awarded the Nobel Prize in Physiology or Medicine on October 3, 2016 for his discovery of autophagy, from a Greek term meaning “self-eating”. Autophagy is a crucial process for cellular survival. During starvation, cells break down proteins and reuse them for energy, internally running their own recycling plant to survive. Autophagy is also critical during infections, protecting the cell by destroying invading viruses or bacteria and then sending them for recycling. Cells can also use autophagy to get rid of damaged protein structures. In diseases such as cancer, neurodegenerative disorders, and immunological diseases, autophagy is thought to be defective. The importance of this cellular recycling mechanism was not known until Dr. Ohsumi studied the process in baker’s yeast in the 1990s.

Dr. Ohsumi received his Ph.D. in molecular biology from the University of Tokyo in 1974. His ‘unimpressive’ Ph.D. thesis made it difficult for him to find a job, and his advisor suggested a postdoctoral position at Rockefeller University in New York, where he was to study in vitro fertilization in mice. Because Dr. Ohsumi grew ‘very frustrated’ there, he switched to studying the duplication of DNA in yeast. This work led him to a junior professorship at the University of Tokyo, where he began his autophagy work. Dr. Ohsumi later moved to the National Institute for Basic Biology in Okazaki, and since 2009 has been a professor at the Tokyo Institute of Technology.

“All I can say is, it’s such an honor,” Dr. Ohsumi told reporters at the Tokyo Institute of Technology after learning he had been awarded the Nobel, according to the Japanese broadcaster NHK. “I’d like to tell young people that not all can be successful in science, but it’s important to rise to the challenge.” (Gina Kolata and Sewell Chan, New York Times)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

October 4, 2016 at 9:02 am

Science Policy Around the Web – September 30, 2016


By: Jessica Hostetler, PhD

Source: Flickr, under Creative Commons

Human Genetic Manipulation

World’s first baby born with new “3 parent” technique

On September 27, 2016, the New Scientist reported the birth of a baby born with DNA from three people. The now five-month-old healthy baby boy was born in New York to a Jordanian couple who had struggled for years to have a healthy child. The baby’s mother carries genes for lethal Leigh syndrome, a neurological disorder typically resulting in death within 1-3 years of birth, from which her first two children had died. These genes were carried in about 25% of her mitochondria, the energy producers of cells, which contain 37 genes kept separate from the thousands of other genes held inside the cell’s nucleus. Mitochondrial genes are passed down only from the mother, through the mitochondria present in her egg before it is fertilized by the father’s sperm.

The couple worked with US-based fertility expert John Zhang from the New Hope Fertility Center in New York City to undergo an approach for mitochondrial replacement therapy (MRT) called spindle nuclear transfer. Dr. Zhang transferred the nucleus of one of the mother’s eggs into a donor egg, which had the nucleus removed but contained healthy mitochondria. Several of these eggs were then fertilized with the father’s sperm to make 5 embryos with nuclear genes from both the father and the mother and mitochondria from the donor. The only healthy embryo was then implanted into the mother, and resulted in the birth of a healthy baby boy, with 99% healthy mitochondria.

This type of egg manipulation is now legal in the UK, though effectively banned in the US, so the team completed the fertility work in Mexico, which lacks clear regulations for the procedure. While several experts, such as Sian Harding, who reviewed the ethics for the UK guidelines, and legal scholar Rosario Isasi (quoted in a Nature article), have acknowledged that Zhang’s group appears to have followed ethical guidelines, questions remain about the ethics, quality, and safety of the technique.

The report was covered in a number of additional articles and commentaries, including in the New York Times, Science, and Nature. The commentaries note that researchers are eager for more information on a host of fronts such as the choice of using Mexico as the site of the work (as opposed to a more regulated and rigorous scientific environment) and the threshold of contaminating maternal mitochondria used in transfers (5%). These and other specifics are likely to come up when Dr. Zhang and team report on the case at the American Society for Reproductive Medicine meeting in October, 2016. (Jessica Hamzelou, New Scientist)

Health Policy

Why do obese patients get worse care? Many doctors don’t see past the fat

One in three Americans is obese; despite this fact, doctors and the healthcare system remain ill equipped in “attitudes, equipment and common practices” to treat obese patients. Beyond equipment issues, such as 90% of ERs and 80% of hospitals lacking M.R.I. machines built to accommodate very obese patients, research into bias against obese patients (both conscious and unconscious) shows that healthcare providers spend less time with such patients and refer them for fewer diagnostic tests. The same review reports that doctors feel less respect for obese patients and are more likely to stereotype them as “lazy, undisciplined and weak-willed,” all of which can negatively impact communication in the doctor-patient relationship, which in turn affects quality of care. In an effort to address the problem, the American Board of Obesity Medicine was founded to educate physicians about patient care and provide certification for achieving “competency in obesity care.”

Currently, these attitudes can lead health care providers to misdiagnose symptoms as obesity-related instead of fully investigating other, potentially life-threatening causes. Drug dosing may often be incorrect for obese people, particularly for cancer drug regimens, for which obese individuals have worse outcomes across the board. Many orthopedists refuse to perform hip and knee joint replacement surgery for obese patients unless they lose weight, though a review committee from the American Association of Hip and Knee Surgeons recommends a measured approach that includes options for surgery in some patients after the risks are discussed. The problems obese patients face may be exacerbated by a risk-averse hospital culture in which adverse event scores affect Medicare reimbursements, pushing hospitals to avoid treating higher-risk patients. Beyond this, there is a distinct lack of guidance from drug makers on correct dosing of anesthesia drugs, with only a few exceptions, such as a report from Dr. Hendrikus Lemmens of Stanford University. Dr. Lemmens notes that 20-30% of obese-patient stays in intensive care after surgery are due to anesthetic complications, which are likely frequently caused by drug dosing errors. The challenge of providing quality healthcare will only grow as the number of obese patients in the US continues to increase. (Gina Kolata, New York Times)

Have an interesting science policy link?  Share it in the comments!

Written by sciencepolicyforall

September 30, 2016 at 9:00 am