Behind Closed Doors: IRBs and the Making of Ethical Research (Morality and Society Series)
University of Chicago Press, 2012
240 pp., $34.00
Science in Focus: Robert Alan Greevy, Jr.
Behind Closed Doors, Part 3
For members of an Institutional Review Board (IRB), ethics is not a simple right vs. wrong affair. Clearly it is wrong to allow someone to get seriously injured, but is it wrong to allow someone a small probability of getting seriously injured if there is a decent probability of discovering lifesaving knowledge as a result? It depends. It depends on how serious the potential injuries are and how valuable the potential knowledge is. However, it also depends on how likely the injuries are and how likely it is the knowledge will be discovered. Ethical judgments are exercises in estimating probabilities. When making a decision about right and wrong, you will need a statistician.
In some ways, Laura Stark's Behind Closed Doors: IRBs and the Making of Ethical Research is about power: the abuse of power that left African American men in Alabama, who already had syphilis, untreated and deceived about their condition in the Tuskegee Syphilis Study; the power vested in the federal government by the National Research Act, which was passed in response to public outcry against the Tuskegee study and grants the government authority to regulate any research that involves human subjects; and especially the power that was given in turn to IRBs to enforce ethical practice in human subjects research. However, Behind Closed Doors is much more about expertise and the vital role that experts perform in protecting others through their service on IRBs. As a statistician and medical researcher who is continually on the receiving end of IRB reviews, I found Stark's inside look into the workings of IRBs engaging and encouraging. I also found myself contemplating the vital role of statistical expertise in the review process and the intricacies of another type of power, "statistical power."
In discussing the function that the Belmont Report's three broad guiding principles play in the IRB's deliberative process, Stark highlights the third principle of weighing the risks and benefits of a study against each other. She notes the challenging nature of the task. It is rare to have an apples-to-apples comparison. Typically, the IRB must weigh many potential risks against a few potential benefits, where the risks and the benefits range from trivial to significant. "Potential" is a key word. A study with serious risks that are likely to occur will not be approved, but one with serious risks that are possible but unlikely may get approved. Again, it depends on the potential benefit of the study—in particular, how valuable the potential knowledge to be gained from the study is and how likely it is the study will generate that knowledge. "Statistical power" is the formal term for the probability a proposed study will discover the smallest clinically important effect, should that effect actually exist. If a study's statistical power is too low, even small risks may outweigh the potential benefits of the study.
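Under textbook assumptions, that probability has a closed form. As a minimal sketch (the effect size, standard deviation, and sample size below are invented purely for illustration), here is a normal-approximation power calculation for a hypothetical two-arm trial:

```python
from statistics import NormalDist

def two_sample_power(effect, sigma, n_per_arm, alpha=0.05):
    """Approximate power of a two-arm trial to detect a true mean
    difference of `effect`, using a two-sided z-test."""
    nd = NormalDist()
    se = sigma * (2 / n_per_arm) ** 0.5  # standard error of the arm difference
    z_crit = nd.inv_cdf(1 - alpha / 2)   # significance threshold
    shift = effect / se                  # how far the truth sits from zero
    # probability the observed difference lands beyond either threshold
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

# invented numbers: detect a 5-point difference, sd 15, 100 patients per arm
print(round(two_sample_power(5, 15, 100), 3))
```

Such a trial has roughly a two-in-three chance of detecting the effect; an IRB weighing risks against benefits might reasonably ask whether that is high enough.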
There is as much art as science in good statistical power calculations. In collaboration with their research team, statisticians must envision an array of possible scenarios that could influence the ability of the study to detect an effect that the researchers are simply hypothesizing even exists. Some scenarios may be difficult to incorporate into power calculations and may require intricate simulation studies. Power calculations are exact calculations for very approximate settings. Without expertise, they can be difficult to interpret and their validity hard to determine.
In this regard, the IRB may benefit from the work of scrupulous funding agencies. Statistical power is not just an issue of ethical consideration, but one of financial consequence. A funding agency does not want to pour money into a study with little chance of detecting anything of interest. However, statistical expertise is a scarce commodity, which is a boon to those of us in the field but a challenge for funding and review agencies.
Human subjects research is rife with suboptimal study designs and analysis methods. Consider one of the key elements of many clinical trials: randomization. Randomization is an ethical conundrum in itself. Is it ethical to randomly assign a patient to a treatment that has no chance of helping them, i.e., a placebo, so that we can learn about a treatment that could possibly help them down the road? The value of randomization in generating clear, useful knowledge is so great that its benefit commonly outweighs any ethical concerns. However, simple randomization, e.g., pulling names from a hat or assigning treatment by tossing a coin, is highly inefficient compared to restricted randomization, e.g., matching patients and randomizing within pairs, or randomizing patients with a probability that adapts to keep the two treatment arms similar in terms of important characteristics. That is, a study can increase its power, sometimes quite considerably, by using a modern randomization method. Yet simple randomization and other inefficient methods remain common, even though they essentially throw away statistical power. Using an inefficient randomization method is equivalent to recruiting and paying for a group of patients, putting all of them through the rigors of the study, and then throwing away the data from a portion of them. An IRB could require a study to use a modern randomization method at little to no cost and thereby increase the study's power. To not do so when feasible is simply unethical.
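To make the efficiency claim concrete, here is a hedged sketch (all numbers invented) comparing simple coin-flip randomization with matched-pair randomization on a strong prognostic covariate. Pairing shrinks the variability of the estimated treatment effect, which is exactly where the extra power comes from:

```python
import random
from statistics import mean, stdev

def one_trial(method, rng, n=100, effect=1.0):
    """Simulate one hypothetical trial and return the estimated
    treatment effect. Outcomes depend strongly on a covariate x."""
    x = [rng.gauss(0, 3) for _ in range(n)]
    if method == "simple":
        # coin-flip assignment: arms may end up imbalanced on x
        t = [rng.random() < 0.5 for _ in range(n)]
        y = [xi + effect * ti + rng.gauss(0, 1) for xi, ti in zip(x, t)]
        treated = [yi for yi, ti in zip(y, t) if ti]
        control = [yi for yi, ti in zip(y, t) if not ti]
        if not treated or not control:   # degenerate allocation; redraw
            return one_trial(method, rng, n, effect)
        return mean(treated) - mean(control)
    # matched pairs: sort by covariate, pair neighbours, flip one coin per pair
    order = sorted(range(n), key=lambda i: x[i])
    diffs = []
    for a, b in zip(order[::2], order[1::2]):
        ta, tb = (a, b) if rng.random() < 0.5 else (b, a)
        ya = x[ta] + effect + rng.gauss(0, 1)  # treated member of the pair
        yb = x[tb] + rng.gauss(0, 1)           # control member of the pair
        diffs.append(ya - yb)
    return mean(diffs)

rng = random.Random(1)
simple = [one_trial("simple", rng) for _ in range(500)]
paired = [one_trial("pairs", rng) for _ in range(500)]
# pairing on the covariate should shrink the spread of the estimate markedly
print(round(stdev(simple), 2), round(stdev(paired), 2))
```

Both estimators are centered on the true effect; the paired design simply reaches the same answer with far less noise, which translates directly into more power for the same number of patients.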