Scrutiny beyond Big Pharma

August 24, 2011

We are all familiar with drug trial or disease-screening controversies in which the journalists reporting on studies fail to ask the hard questions while journals and authors stonewall criticism. Since Mr. Heisel and Dr. Schwitzer have written so ably in those areas, I defer to their posts. But it happens in behavioral trials as well. Corpses are much more dramatic; in behavioral trials the potential for harm, or the lack of benefit, is often less obvious.

Regardless of the subject of the trial, the type of trial, or the journal in which it was published, the following are examples of reasonable questions to ask:

  • More and more journals and organizations are registering and publishing clinical trial protocols before trials begin, so that a comparison can be made to determine, for example, whether the original protocol was followed or changed. In a 2008 study of RCTs published in the Lancet, 30% had major differences between the initial protocols and the subsequently published results. The most common differences pertained to primary outcome reporting, with multiple studies either changing or not reporting the original primary outcome(s). Not everything is nefarious, but endpoints may be changed mid-study to make the proposed therapy appear more efficacious than it would have been had the originally proposed endpoints been used. This strategy is not limited to drug trials, and if you don't read both the pre-trial protocol and the full published study, it's difficult to ask the hard questions.
  • There is a big difference between the self-report instruments often used in behavioral studies and more objective means of measurement, and every instrument has its drawbacks. How might the use of different instruments affect the results? Example: on self-report the patient may say they feel better (define "better," please) while immune markers or some other objective biomedical measurement doesn't change. Or perhaps the patients without objective changes are also the ones who responded poorly, or not at all, to the intervention. You don't know if you don't ask, or measure.
  • Can the results from the defined population that was studied be extrapolated to very different groups and situations? Those familiar with journal clubs know this is one of the key questions explored. For example: is it problematic to study peaches, call them all apples, and then extrapolate the findings to all round fruit?
  • How do the standards used compare with those applied in comparable groups? Example: can findings in one neurological disease be extrapolated to patients with other neurological diseases? If activity levels are being measured, how does the studied group compare with another group, say cardiac patients with the elderly?

The above is by no means a comprehensive list, but finally there are conflicts of interest: how might they influence the research? Do the authors consult for pharmaceutical, insurance, or biotech companies? If they consult for disability insurers, for example, can they claim their intervention means patients are not disabled? If they do, insurance companies may stand to gain. Conflicts of interest are red flags to be reported, not necessarily indictments. Nor, as the Lancet and PLoS have pointed out, are conflicts of interest limited to financial conflicts. And sometimes bias is simply the result of being blinded by an idea or ideology. It's not a crime, but it will influence everything from the way the study is designed to the way patients are defined.