Tuesday, April 23, 2013

Women and children overboard



It's the Catch-22 of clinical trials: to protect pregnant women and children from the risks of untested drugs ... we don't test drugs adequately for them.

In the last few decades, we've been more concerned about the harms of research than about the harms of inadequately tested treatments - for everyone, in fact. But for "vulnerable populations," like pregnant women and children, the default was to exclude them.

And just in case any women might be, or might become, pregnant, it was often easier just to exclude us all from trials.

It got so bad that by the late 1990s, the FDA realized that regulations for pregnant women - and women generally - had to change. The NIH (National Institutes of Health) took action too. And so few drugs had enough safety and efficacy information for children that, even in official circles, children were being called "therapeutic orphans." Action began on that, too.

There is still a long way to go. But this month there was a sign that maybe times really are changing. The FDA approved Diclegis for nausea and vomiting in pregnancy. It's a new formulation of the key ingredients of Bendectin, the only other drug ever approved for that purpose in the USA. Nothing else has been shown to work.

Thirty years ago, the manufacturer withdrew Bendectin from the market because it was too expensive to keep defending it in the courts. It's a gripping story, involving the media, activists, junk science and some fraud. It had a major influence on clinical research, public opinion and more. You can read more about it in my guest blog at Scientific American, Catch-22, clinical trial edition: the double bind for women and children.

In dozens of court cases over Bendectin, judges and juries struggled with competing testimony about scientific evidence. In one hearing, a judge offered the unusual option of a "blue ribbon jury" or a "blue, blue ribbon jury": selecting only people who would be qualified to understand the complex testimony and issues of causation. The plaintiffs refused.

Ultimately, in one of the Bendectin cases, Daubert v. Merrell Dow Pharmaceuticals, the Supreme Court re-defined the rules around scientific evidence for US courts. The previous Frye Rule called for consensus - "general acceptance" in the relevant scientific community. The 1972 Federal Rules of Evidence said "all relevant evidence is admissible."

The new Daubert standard determined that evidence must be "reliable" - grounded in "the methods and procedures of science" - not just relevant.

We still need everyone involved to better understand what reliable scientific evidence on clinical effects really means, though. You can read more about that here at Statistically Funny.


Tuesday, April 9, 2013

Look, Ma - straight A's!



Unfortunately, little Suzy isn't the only one falling for the temptation to dismiss or explain away inconvenient performance data. Healthcare is riddled with this, as people pick and choose studies that are easy to find or that prove their points.

In fact, most reviews of healthcare evidence don't go through the painstaking processes needed to systematically minimize bias and show a fair picture. You can read more about how it's done thoroughly in this explanation of systematic reviews at PubMed Health.

A fully systematic review very specifically lays out a question and how it's going to be answered. Then the researchers stick to that study plan, no matter how welcome or unwelcome the results. They go to great lengths to find the studies that have looked at their question, and they analyze the quality and meaning of what they find.

The researchers might do a meta-analysis - a statistical technique to combine the results of studies (explained here at Statistically Funny). But you can have a systematic review without a meta-analysis - and you can do a meta-analysis of a group of studies without doing a systematic review.
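To make the combining idea concrete, here's a minimal sketch of a fixed-effect, inverse-variance-weighted meta-analysis - one common way results get pooled. The effect estimates and standard errors below are made up for illustration, not taken from any real trials:

```python
import math

# Hypothetical effect estimates (e.g. mean differences) and their
# standard errors from three made-up trials - illustration only.
effects = [0.30, 0.10, 0.25]   # effect estimate from each trial
ses     = [0.15, 0.20, 0.10]   # standard error of each estimate

# Weight each trial by its precision: 1 / variance of its estimate
weights = [1 / se**2 for se in ses]

# Pooled estimate = precision-weighted average of the trial effects
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled estimate
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Bigger, more precise trials pull the pooled result harder, and the pooled estimate ends up more precise than any single trial - which is the whole point of combining them.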

To help make it easier for people to sift out the fully systematic from the less thorough reviews, a group of us, led by Elaine Beller, have just published guidelines for abstracts of systematic reviews. It's part of the PRISMA Statement initiative to improve reporting of systematic reviews.

A quick way to find systematic reviews is the National Library of Medicine's PubMed Health. It's a one-stop shop of systematic reviews, information based on systematic reviews and key resources to help you understand clinical effectiveness research. You can read more about PubMed Health here.

Do systematic reviews entirely solve the problem Julie saw with those school grades? Unfortunately, not always. Many trials are never published at all, and no amount of searching or digging can get to them. This happens even when a trial has good news, but it happens more often with disappointing results. The "fails" can be very well-hidden. Yes, it's as bad as it sounds: Ben Goldacre explains the problem and its consequences here.

You can help by signing up to the All Trials campaign - please do, and encourage everyone you know to do it too. Healthcare interventions simply won't all be able to have reliable report cards until the trials are not just done, but easy to get at.


Interest declaration: I'm the editor of PubMed Health and on the editorial advisory board of PLOS Medicine.


Sunday, April 7, 2013

Don't worry ... it's just a standard deviation


Of course, every time Cynthia and Gregory make the 8-block downtown trip to the Stinsons, it's going to take a different amount of time, depending on traffic and so on - even if it only varies by a minute or two.

Most of the time, the trip to the Stinsons' apartment would take between 10 minutes (in the middle of the night) and 45 minutes (in peak hour). Giving a range like that is similar to the concept of a margin of error or confidence interval (explained here).

So what's a standard deviation and what does it tell you? Well, it's not a comment on Gregory's behavior! Deviance as a term for abnormal behavior is an invention of the 1940s and '50s. Standard deviation (or SD) is a statistical term first used in 1894 by one of the key figures in modern statistics, Karl Pearson.

The standard deviation shows how far results typically are from the mean (or average). The standard deviation will be bigger when the numbers are more spread out, and smaller when they cluster closely around the mean.

Lots of results will cluster within 1 standard deviation of the mean, and most will be within 2 standard deviations. Roughly like this:





About 95% of results are going to be within 2 standard deviations in either direction from the mean. You can read about how 95% (or 0.05) came to have this significance here. Statistical significance is explained here at Statistically Funny.
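Here's how that plays out with a handful of made-up trip times (hypothetical numbers, just for illustration - not Cynthia and Gregory's actual commute!):

```python
import statistics

# Hypothetical trip times in minutes (made-up for illustration)
trip_times = [22, 25, 19, 30, 24, 21, 27, 23, 26, 20]

mean = statistics.mean(trip_times)
sd = statistics.stdev(trip_times)  # sample standard deviation

# Count trips within 1 and within 2 standard deviations of the mean
within_1 = sum(1 for t in trip_times if abs(t - mean) <= sd)
within_2 = sum(1 for t in trip_times if abs(t - mean) <= 2 * sd)

print(f"mean {mean:.1f} min, SD {sd:.1f} min")
print(f"{within_1}/{len(trip_times)} trips within 1 SD, "
      f"{within_2}/{len(trip_times)} within 2 SDs")
```

Even in this tiny made-up sample, most trips land within 1 standard deviation of the mean, and all of them within 2 - the clustering pattern described above.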

From the standard deviation, it's just a hop, skip, and jump to the standardized mean difference! More about that and an introduction to the mean generally here at Statistically Funny.