Responsibility and Regulation
Jessica Flanigan
DOI:10.1093/oso/9780190684549.003.0004
Abstract and Keywords
The case against approval requirements is even stronger than the empirical record suggests. When patients die because they knowingly and willingly used a dangerous pharmaceutical, drug manufacturers are not culpable for patients’ deaths because patients consent to the risks associated with dangerous drugs. Yet when patients die because they were prohibited from accessing a drug, those who stand in their way are morally responsible for their deaths. This argument appeals to a moral distinction between killing and letting die, which marks out particular policies, those that kill people rather than allowing some to die, as especially unjust. If we accept the distinction between killing and letting die, we ought to conclude that public officials kill people by enforcing prescription requirements. This thesis lends further support to rights of self-medication, especially the right to try.

Keywords: approval requirements, doing harm, allowing harm, moral risk, ideal theory, clinical trials, drug lag, right to try, letting die

Gideon Sofer was diagnosed with Crohn’s disease in 1996 when he was twelve years old. At the time he weighed only forty-five pounds and was severely malnourished. Following his diagnosis, he dedicated his life to educating people about the disease, raising money for Crohn’s research, and advocating on behalf of patients with inflammatory bowel diseases. Gideon had other interests as well. He collected stamps, liked Bruce Springsteen, attended UC Berkeley, and hoped to become a lawyer.

By the time Gideon was twenty-two, he weighed one hundred pounds, and half of his intestine had been removed. He tried several experimental therapies. One was prohibitively expensive, and his insurance didn’t pay for it. Then he enrolled in a clinical trial for an adult stem-cell therapy, but he believed he received the placebo treatment. In an editorial about his experiences accessing unapproved therapies, Gideon wrote, “withholding a potential cure is just as bad—if not worse—than the potential death sentence of a serious illness.”1 He was frustrated that the FDA’s approval requirements prevented him from using a treatment that had the potential to alleviate his symptoms or extend his life when all other options had failed. Elsewhere, he said, “For people like me, for whom nothing has worked, access to new treatments is absolutely critical … it’s the only thing that keeps me hopeful, that keeps me living.”2

Gideon never gained access to the experimental treatment. He died in 2011 when he was twenty-six years old. Thousands of patients like Gideon face similar barriers to access. Yet people generally support the FDA’s policy of barring access to investigational drugs. In the previous chapter, I showed that prohibitive testing requirements, especially in their current form, couldn’t be justified on the grounds that they promote the public’s health. People like Gideon die waiting for approval, and manufacturers are discouraged from investing in new treatments (p.109) because the approval process is so expensive. Despite these health effects, one may still defend premarket testing requirements. Such a defense might go like this. If killing someone is generally morally worse than allowing someone to die, then perhaps approval requirements could be justified on the grounds that they are necessary to prevent manufacturers from killing patients, even if they allow other patients to die.

It may seem that patients like Gideon must bear the costs of approval requirements that benefit all citizens on balance. But, as I argued in the previous chapter, there is substantial evidence that approval requirements do not benefit citizens on balance, because patients are prevented from purchasing and using drugs and also because prohibitive requirements deter innovation.

In this chapter, I argue further that the case against approval requirements is even stronger than the empirical record suggests. When patients die because they knowingly and willingly used a dangerous pharmaceutical, drug manufacturers are not culpable for patients’ deaths because patients consent to the risks associated with dangerous drugs. Yet when patients die because they were prohibited from accessing a drug, those who stand in their way are morally responsible for their deaths.

Public officials make many life-or-death decisions, for example about highway construction, airport screening, or the allocation of scarce resources. But there is an important moral distinction between killing and letting die, which marks out particular policies, those that kill people rather than allowing some to die, as especially unjust. If we accept the distinction between killing and letting die, we ought to conclude that public officials kill people by enforcing prescription requirements. This thesis lends further support to rights of self-medication, especially the right to try.

4.1 How Regulation Kills
In some cases, public officials may justifiably allow some people to die while preventing the deaths of others. For example, by allocating funds to treatment for younger patients, an official may allow older patients to die. Such policy choices may be justified by an appeal to fairness (e.g., younger patients have had fewer years of life already) or overall welfare (e.g., younger patients have more good years ahead of them if they are cured). I have argued that these justifications generally fail to justify approval policies that cause patients to die for the sake of medical research or broader public health goals, because patients are relatively worse off than other citizens and because the empirical record suggests that premarket efficacy-testing requirements at least cost more lives than they save by deterring innovation and preventing patients from accessing beneficial therapies. But even if proponents of approval policies could establish that the (p.110) burdens of premarket testing requirements were fairly distributed and that they promoted overall welfare, such requirements would still be unjust because, unlike officials who allow some to die while allocating lifesaving resources to others, public officials kill people when they prohibit the sale of investigational drugs.

The moral distinction between killing and letting die distinguishes prohibitive premarket approval policies as especially unjust—even worse than the empirical record suggests.3 While it may be acceptable for officials to make life and death decisions when deciding how to allocate resources, officials are not authorized to kill some citizens for the sake of broader public health goals. It would be wrong, for example, for officials to select some citizens by lottery to be killed for the sake of medical research or organ redistribution, even if such a system promoted public health and overall welfare on balance. Public officials who enforced such policies would be morally blameworthy for the death and suffering they caused. For the same reasons, officials who kill patients by enforcing approval requirements are blameworthy for the death and suffering they cause, and would still be blameworthy even if approval requirements did save lives on balance (contrary to available evidence).

I should specify what I mean by “killing” when I say that public officials kill people when they enforce premarket approval requirements. I mean that officials’ actions meet the conditions that distinguish other actions that constitute killing from instances of allowing people to die. Though it may appear that public officials merely allow people to die from diseases when they enforce regulations that prevent patients from accessing experimental drugs, when these patients die from diseases, it is because they were killed by pharmaceutical regulators. This argument relies on the moral distinction between killing and letting die, which is grounded in a more fundamental set of moral commitments.4 It may initially seem that the distinction between killing and letting die is descriptive, that killing simply consists in initiating a deadly sequence of events whereas letting die consists in failing to stop a deadly sequence of events. But a solely descriptive take on the distinction encounters challenging counterexamples, such as cases of killing through inaction. When a nurse deliberately fails to feed the incapacitated patients under his care, he kills them.5 Nor can a theory of (p.111) intention or bodily action explain the distinction since a person may intentionally use her body to withdraw aid from another person, thereby letting him die.6 And sometimes failures to intentionally act are cases of killing. For example, imagine your car is coasting down a hill toward a crosswalk full of people. You do not press the brakes, and several people are run over by your car. In this case, you killed the pedestrians even though you did not intentionally act by steering your car into them.

Descriptive approaches to the distinction between killing and letting die fail because the distinction is based on more fundamental moral distinctions, such as the relative weightiness of negative rights and judgments of responsibility. It is generally worse to kill or harm a person than to allow her to suffer or die because it is worse to violate negative rights than positive rights and because we are generally more responsible for what we do than what we allow. On this reconstruction, seeming instances of killing reliably align with violations of negative rights for which someone is morally responsible, whereas cases of letting die reliably align with failures to satisfy someone’s positive rights to assistance.7

There are three compelling reasons to accept a moral distinction between killing and letting die. First, negative rights violations are generally morally worse than positive rights violations. Warren Quinn develops this argument on the grounds that to deny a distinction between doing and allowing is to embrace an unacceptable moral theory.8 Without a distinction between negative rights (p.112) (against being harmed) and positive rights (against suffering) our bodies and projects would be fully subject to the community’s cost-benefit analysis. If harming you led to less suffering overall, then morality would require that you be harmed. At least in principle, act consequentialists believe this. But most of us think that we have a special claim to control our bodies, even if exercising that control is in some sense worse for everyone. Our claims to control our bodies may be justified because each person has a kind of inviolable dignity, or bodily rights could be justified by a more pluralistic commonsense moral theory. In any case, this special claim explains the greater weight of negative rights; they protect our unique entitlement to control our own lives and our own bodies in addition to our entitlement to escape death or suffering, whereas positive rights only protect the latter.

A second reason that denying the distinction between killing and letting die is unsustainable is that our moral concepts, such as our concept of responsibility, presuppose that what people do is morally different from what people allow. For example, Samuel Scheffler argues that if we are going to have a moral theory that assigns responsibility to people at all, it must recognize that it is morally worse to do something wrong than to allow something bad to happen.9 Responsibility is a way of assigning praise or blame to distinctive persons, so we must assume that people are distinctive in their role of making the world better or worse by distinguishing between what people do and what they allow.10

A third reason to maintain a distinction between killing and letting die is that denying the distinction would be unfair. A moral theory should give each member of the moral community equal standing. One way to interpret this is that each person should be subject to the same moral requirements and should (p.113) have equal rights. But a moral theory that denied the distinction between killing and letting die would disproportionately burden people who were well placed to prevent the deaths of others relative to those who were not well placed to prevent others’ deaths.11 If letting die were as morally serious as killing, then people would be as blameworthy for the deaths that could have been avoided merely due to their circumstances as they would be for deaths that could have been avoided had they not chosen to kill.

For these reasons, the distinction between killing and letting die plays two important roles. First, the distinction marks out some actions as especially wrong or unjust because it tracks a more fundamental moral asymmetry between violations of negative rights and positive rights. In this way, the distinction ought to inform our moral assessments of different policy proposals. It may explain, for example, why government policies, such as the Transportation Security Administration’s airport checkpoints, are less morally objectionable than a policy that confiscated some people’s vital organs to save others’ lives would be, even if the TSA’s policy caused more deaths than it prevented while the organ confiscation program prevented more deaths than it caused. If we accept the distinction between killing and letting die, the TSA would fare better in our moral assessment because even if more people choose to drive to avoid checkpoints, thereby causing more auto fatalities, the victims generally consented to the risks of driving but could not consent to the risks of being killed for the sake of their organs.

Second, the distinction is important for assigning judgments of culpability, and perhaps blame, sanctioning attitudes, apologies, compensation, or punishment (for those who think that punishment should align with desert or wrongdoing). Those who kill are liable to be sanctioned to a greater extent than those who allow the deaths of others. To see why this is true, consider what it would mean if this idea were false. If killing were not worse than letting die, then failures to provide lifesaving assistance would, in principle (setting aside negative externalities), be as much a blemish on one’s character as murder. No one sincerely believes this. As Francis Kamm argues, though philosophers have questioned the distinction between doing and allowing, even those who deny the distinction must acknowledge that we would think very differently of someone who killed a person to save $1,000 and someone who failed to give $1,000 to a charity that saves lives.12 We are forgiving of people who believe that giving to charities would save lives but fail to do so because they are weak-willed, but we would not be so forgiving of a person who cited his weak will when explaining (p.114) why he killed a person to save $1,000. Instead, killers are usually liable to be punished and may reasonably be expected to apologize and pay compensation to their victims’ families.

Turning to approval requirements: by the criteria that ground the distinction between killing and letting die, public officials who withhold access to potentially lifesaving or therapeutic investigational drugs kill those patients who die as a result of premarket approval requirements. To establish this claim on more solid ground, I will need to show that public officials who enforce premarket approval policies violate rights in ways for which they are morally responsible. To show that officials are responsible, I will need to show that when they violate people’s rights not to be killed, it is due to their choices and also that they can and do foresee the deadly consequences of their choices.

Begin with the claim that regulators violate rights by forbidding unauthorized access to investigational drugs. As I previously argued, public officials violate people’s rights of self-medication and self-preservation when they prohibit voluntary exchanges between patients and manufacturers in order to enforce premarket approval policies. More generally, public officials violate rights because no one can consent to the approval requirements that prevent them from accessing investigational drugs.

The reasons in favor of respecting people’s rights to access investigational drugs are especially strong because patients who seek access are often those with few or no other medical options. Patients with conditions that can be effectively treated with approved therapies are likely to use approved treatments, even if they had access to investigational therapies, because unapproved treatments are risky and potentially ineffective. Patients with treatment-resistant terminal or degenerative medical conditions seek access to investigational drugs when it is more risky to wait for approval than to use an untested drug. In these cases, rights of self-medication are also a species of rights of self-preservation.13 And even the most Hobbesian proponents of political authority must concede that if people have any rights against public officials, those rights include the right to preserve one’s own life.

To illustrate this point as it relates to the distinction between killing and letting die, consider an analogy. If an oppressive government appropriates and rations the food supply in a way that causes mass famine, the government officials kill citizens. Officials are culpable when citizens suffer and ultimately die of starvation, even if the rations were not enforced with the goal of causing starvation, because they violated citizens’ rights to purchase food from willing (p.115) providers. In this case, as in the case of pharmaceutical regulation, prohibiting people from accessing lifesaving goods is best understood as an act of killing, rather than letting die, because prohibitions violate people’s rights of self-preservation.

There are countless examples of patients whose rights to preserve their lives were violated because they died waiting for drug approval. Here are two. First, Abigail Burroughs was twenty-one when she died of cancer. In the last stages of treatment, Abigail’s oncologist suggested that an unapproved drug, cetuximab, might treat the kind of cancer cells that were killing her. At the time, cetuximab was available only in clinical trials for colon cancer, so only colon cancer patients had access to the drug. Abigail had head and neck cancer. Abigail died in 2001, and in 2006, cetuximab was approved for treating head and neck cancer.14 Second, seventeen-year-old Adam Askew died of veno-occlusive liver disease in 2008.15 Before his death, Adam’s physician, Dr. Jody Sima, believed that the unapproved drug defibrotide could be used to cure his condition. Dr. Sima based this judgment on published studies of the drug, but FDA regulations did not allow Adam to enter a clinical trial for defibrotide because his symptoms were not severe enough to provide relevant data for the trial. A study released in 2009 confirmed that defibrotide effectively cures 36 percent of younger patients with less severe symptoms—patients like Adam.

Abigail and Adam were denied access to drugs that could have saved or extended their lives. Their stories illustrate how some patients have exceptionally strong rights to access investigational therapies in virtue of their right to preserve their lives through non-lethal means. One may reply that there are other patients who, like Abigail and Adam, suffered from life-threatening illnesses but would have been harmed by using investigational drugs that caused painful side effects or shortened their lives. I grant this point, but it is not as (if at all) wrong for manufacturers to sell dangerous investigational drugs as it is for officials to prohibit people from using dangerous investigational drugs. The following two cases, which are analogous to existing approval policies and a certification system, illustrate this point:

Approval: Patty, Peter, and four others lie in a hospital room dying from a disease. They invite Mark to their room to administer a risky drug. Mark tells them that little is known about the drug, and it is expected to cause one in six patients who use it to die. But the drug is also expected to cure one in six patients who use it. If Mark delivers the drug, it will in fact cause Patty to die and it will cure Peter. While Mark is en route, (p.116) Gloria intervenes and delays Mark’s visit. Patty, Peter, and four others die of their diseases.

Certification: Patty, Peter, and four others lie in a hospital room dying from a disease. They invite Mark to their room to dispense a risky treatment that must be given to all six patients at once. Gloria confirms Mark’s claims that little is known about the drug, and it is expected to kill one in six patients who use it and cure one in six patients who use it. Mark delivers the drug to the consenting patients. The drug causes Patty to die and cures Peter. The four others die of their diseases.

These cases illuminate the crucial moral distinction between the actions of manufacturers and regulators. Patients can consent to take a risky drug, but they cannot consent to the risky regulations that endanger their lives.16 When regulators interfere with transactions between patients and manufacturers, they kill those patients who could benefit. In Approval, Gloria kills Peter by preventing him from accessing the cure. She also harms Patty and the others by depriving them of the one in six chance of being cured. In contrast, when manufacturers distribute dangerous and untested drugs that harm patients, as Mark does in Certification, their actions do not violate rights because patients like Patty consent to the risks associated with using an investigational drug.

In Approval, Gloria also impermissibly interferes with Mark by preventing him from distributing the drugs, because Mark’s actions are permissible, and so Mark is not liable to be interfered with. One may question the claim that Mark’s actions are permissible and that Mark is not liable to be interfered with on the grounds that consent is not sufficient to establish that a person has assumed certain risks. For example, a person may consent to the risks associated with a mountain biking tour but not consent to the risks associated with the tour company’s failure to maintain its equipment. In such a case, the negligent tour company would be liable to be interfered with on behalf of its customers, even if the customers did consent to some of the risks of the tour. But in Mark’s case, we are assuming that Mark fully discloses all known risks associated with a drug and that there are no additional risks that result from Mark’s negligence. So Mark’s conduct is more like the conduct of a non-negligent tour company, meaning that the patients are able to give their informed consent.17

(p.117) Another important feature of these cases is that Gloria actively interferes with Mark and the patients. This is significant because another reason that killing is generally worse than letting people die is that killing generally involves a knowing decision to cause another person to suffer and die. By preventing manufacturers from providing therapeutic drugs, public officials enforce a policy that they know will cost lives. Every year, patients suffer and die after being denied access to investigational drugs that are subsequently approved to treat their conditions. Whenever a regulatory agency or drug company claims that a newly approved drug will save thousands of life-years for each year it is on the market, it concedes that thousands of life-years were lost during the time it took for the drug to win approval. Regulators cannot take credit for their role in facilitating informed access to lifesaving drugs without also taking blame for the lives that were lost because of approval delays.

In some cases, pharmaceutical regulators do take steps to minimize the harm of preventing people from accessing investigational therapies. For example, clinical trials are governed by the standard of clinical equipoise, which requires that investigators halt a trial or provide the superior treatment to all patients once the trial results pass a threshold of evidence about the relative merits of the control and treatment. The principle of equipoise aims to mitigate the harm of being deprived of beneficial treatment for the sake of a clinical investigation. Patients who enroll in a trial, and thereby consent to the risk of receiving the inferior treatment, are entitled to treatment that meets standards of equipoise. Yet by this logic, patients who are excluded from trials for investigational drugs have an even stronger claim to receive treatment because they cannot consent to the risks of being denied a superior treatment the way trial participants do. But regulators currently force these patients to accept the standard of care until the drug is approved, even if it is clearly inferior to an investigational drug.

Elsewhere, agencies have taken steps to provide greater incentives to develop drugs for rare diseases and to speed the approval process, by implementing policies like the ODA and PDUFA. Their adoption of these policies further suggests that public officials are aware of the deadly effects of enforcing expensive and lengthy approval requirements. So regulators recognize that their actions are harmful but nevertheless do not take sufficient steps to avoid those harms.

This is not to suggest that pharmaceutical regulators intend to kill patients by withholding approval for drugs. But whether officials intend to kill patients is extraneous to the permissibility of killing, as it is in other contexts.18 If a military leader kills dozens of innocent civilians by bombing a bridge that is used by his enemies, he is morally responsible for the civilians’ deaths, even (p.118) though he did not directly intend to kill them. Assuming that refugees have rights to asylum, if a border patrolman turns away a boat full of refugees and they die in the sea, then the patrolman has killed the refugees, even if he only intended to prevent them from landing on the shores of his country. So too, when regulators prohibit patients from accessing lifesaving therapeutics, they kill patients, even if they intend to minimize the number of dangerous drugs on the market.

In summary, existing pharmaceutical regulations are morally worse than the empirical record suggests. Even if premarket approval policies prevented more deaths than they caused, they still would not be justified because the prohibition of investigational drugs violates patients’ fundamental rights and is not necessary to promote informed patient choice. For this reason, premarket approval requirements are not only deadly but also unjust, and insofar as there is a duty to resist injustice and reform institutions, citizens and officials ought to resist and reform pharmaceutical regulations. Since regulators are morally responsible for the deaths they knowingly cause by enforcing approval requirements, they are liable to be blamed and sanctioned when they kill patients. This argument calls for apology, compensation, and reparation in these cases as well.

4.2 Necessity and the Need to Test
Some people think that public officials may justifiably kill people in the service of a just cause, as long as killing is necessary for the cause and would effectively advance the cause. For the sake of argument, say we granted that in principle public officials could sometimes use their power to prevent people from using dangerous drugs for the sake of public health. Even granting this premise, prohibitive approval requirements would be neither necessary nor effective at promoting public health, so they would not be justified in practice. The requirements are unnecessary because public officials could certify drugs and subsidize access to expert advisors instead of prohibiting unapproved drugs. The requirements are ineffective because they likely cause more harm than they prevent by causing drug loss and drug lag.

One may reply that approval requirements are necessary for the promotion of the public’s health and safety when we consider health and safety more broadly. Though prohibitive approval requirements are not necessary to protect patients from using dangerous pharmaceuticals, they may be necessary for the sake of broader public health goals if granting patients the rights to use investigational drugs before they were approved would discourage them from participating in clinical trials. If so, rights of self-medication could violate the public health imperative to promote informed self-medication by threatening the clinical trial system.

(p.119) Partly for this reason, courts in the United States have been reluctant to acknowledge that rights of self-preservation require a right of access to investigational drugs.19 Before Abigail died, she and her father, Frank Burroughs, founded the Abigail Alliance for Better Access to Developmental Drugs. After her death, Frank sued the FDA and the Department of Health and Human Services on her behalf, alleging that the failure to permit terminally ill patients to access experimental drugs violated the fundamental right to preserve one’s own life. In 2004 a DC district court ruled against the Abigail Alliance, but in 2006 a three-judge panel of the DC Circuit Court of Appeals overturned the ruling and found that terminally ill patients do have a constitutional right to purchase experimental medicines that had successfully passed Phase 1 safety testing.20 The judges affirmed the argument that public officials violate patients’ rights by preventing them from using potentially lifesaving medication. The court wrote that these approval policies “impinge[d] upon an individual liberty deeply rooted in our Nation’s history and tradition of self-preservation.”21 The FDA responded to this ruling by petitioning the Court of Appeals to rehear the case en banc, meaning that all judges in the DC appellate circuit were asked to rehear the case. There, a divided court denied that access to experimental drugs was a fundamental right and overturned the previous ruling.22

In part, the en banc court’s reversal was justified by the FDA’s assertion that greater patient access would undermine the clinical trial system. The claim was that if people could use investigational drugs outside the context of a clinical trial, there would be no incentive for them to enroll in clinical trials and risk receiving the standard of care. Call this the “need to test” argument. This argument relies on two premises. First, it assumes that granting patients access to investigational therapies would necessarily undermine the clinical trial system. Second, the argument assumes that public officials act permissibly when they prevent patients from using investigational drugs out of concern for the clinical trial system.

Would an expansion of access compromise the approval process? Not necessarily. The claim is that public officials and researchers would be unable to use randomized trials to test a drug’s effectiveness if patients insisted on using unapproved drugs. But as Eugene Volokh has argued, this argument cannot (p.120) justify policies that prohibit patients who do not qualify for enrollment in clinical trials from accessing investigational therapies. Volokh writes:

If the studies require 200 patients, and there are 10,000 who seek the experimental therapy, there is little reason to constrain the self-defense rights of all 10,000. Likewise, if the drug is now being studied only on people who suffer from a particular kind or stage of a disease, the drug should not be legally barred to those who fall outside those studies. If we must strip people of self-defense rights to save many others’ lives in the future, we should impose this tragic constraint on as few people as possible and to as small an extent as possible.23

The approach to clinical trials that Volokh describes has already been deployed for some drugs. For example, during the AIDS epidemic, patients who did not qualify for enrollment in clinical trials were permitted to access investigational drugs on a parallel track. Manufacturers and public officials may even have reason to favor a system that allows greater access. Manufacturers can monitor patients on parallel tracks to learn about other potential uses of investigational drugs. Public officials can monitor patients on parallel tracks to learn more about the effects of the drugs for different populations.

More generally, although approval requirements may give patients incentives to enroll in clinical trials, they are not necessary to encourage enrollment. Patients in clinical trials already receive other benefits by enrolling, such as subsidized medical care and careful monitoring, which go beyond early access to investigational therapies. Nevertheless, a prominent concern about expanding access to investigational drugs is that patients who are not effectively treated by available therapies will access the drugs rather than risk assignment to a control group in a clinical trial.

Insofar as concerns about incentivizing trial participation are valid, it is because some patients will judge that the risk of assignment to a control group is genuinely worse than access to an investigational drug with unknown effects. Yet in these cases, it is not clear that a trial would be ethical in the first place. Reconsider the principle of equipoise, which requires researchers to remain genuinely uncertain about whether the investigational therapy is better than the standard of care as they conduct a trial. As it becomes clear that an investigational therapy is more harmful than the standard of care, researchers are required to move trial participants out of the treatment arm. Or if it becomes clear that an investigational therapy is more effective than the standard of care, researchers are required to move participants into the treatment arm of the trial.

(p.121) The principle of equipoise is justified because patients do not have a choice about whether they are assigned to the treatment or the control arm of a trial, so researchers have a duty to ensure that patients are not forced to accept worse care because of their participation in the trial. Yet the argument that patients must be prohibited from accessing investigational drugs because they would otherwise choose them over trial enrollment illustrates that the principle of equipoise is not met in cases where patients judge that the risks of being in the control arm are clearly higher than the risks of an investigational drug. In other words, if patients only participate in a clinical trial because they have no chance of accessing an investigational treatment otherwise, that is a sign that the trial may be violating standards of clinical equipoise.

While I agree that randomized clinical trials are generally the best method for medical research because randomization can mitigate selection bias, it is also important to remember that researchers can learn about the effects of investigational drugs without random assignment. For new drugs being tested against the standard of care, existing patient populations can serve as a control group, even if most patients flock to use an investigational therapy because the effects of the existing standard of care are already known. For example, instead of looking at a particular subset of patients who are randomly assigned to a treatment at a particular time, researchers can treat the advent of availability as an instrumental variable and compare patients who are treated for a condition at a particular time to those who had the condition before the treatment and received the standard of care.
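To make this concrete, here is a minimal sketch of the kind of pre/post comparison described above. It is purely illustrative rather than the author’s proposal: the cohort sizes and survival rates are hypothetical, and a serious analysis would also need to account for other differences between the historical and contemporary patient populations.

```python
# Illustrative sketch only: compare outcomes for patients treated after an
# investigational drug became available against a historical cohort that
# received the standard of care. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical one-year survival indicators (1 = survived one year).
historical_cohort = rng.binomial(1, 0.30, size=400)  # standard of care, pre-availability
treated_cohort = rng.binomial(1, 0.42, size=350)     # treated once the drug became available

gain = treated_cohort.mean() - historical_cohort.mean()
t_stat, p_value = stats.ttest_ind(treated_cohort, historical_cohort, equal_var=False)
print(f"estimated survival gain: {gain:.1%} (Welch t-test p = {p_value:.3f})")

# Caveat: without randomization, this estimate is credible only if the two
# cohorts are otherwise comparable (no concurrent changes in diagnosis,
# supportive care, or patient mix) -- the familiar cost of forgoing
# random assignment that the surrounding discussion acknowledges.
```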

The success of the off-label market provides further reason to think that efficacy-testing requirements are not necessary for researchers to learn about new drugs. Since the need to recruit patients for clinical trials for every particular condition that a drug could treat is not weighty enough to justify prohibiting the off-label prescription of drugs, the need to test also cannot justify prohibitive approval requirements.

Relatedly, many randomized clinical trials today occur after approval when patients have access to the drugs being tested.24 Researchers use post-approval trial data to find new uses for available drugs.25 Cancer researchers investigate whether available compounds can effectively treat different kinds of cancers.26 Investigators are also testing whether statins, which are currently approved to prevent heart attacks and strokes for high-risk patients, could also protect low-risk patients.27 The Women’s Health Initiative (WHI) is another example of (p.122) post-market trials that tested the efficacy of an available treatment.28 Before the WHI, anecdotal evidence strongly suggested that hormone replacement therapy (HRT) could reduce heart attacks for women. But the WHI conducted randomized clinical trials of the therapy. Though HRT was approved and widely available and prescribed off-label, the WHI successfully enrolled patients in the trial. The results showed that HRT actually increased women’s risks of heart attack and other ailments, effectively changing the standard of care through post-market testing.

Trials like these include tens of thousands of patients. In each trial, some participants receive the standard of care and some receive an approved drug that may or may not effectively treat their condition. Yet, despite the fact that both the standard of care and the investigational therapy are available, patients enroll in trials to receive other benefits of clinical trials, such as close monitoring and high-quality subsidized medical care. These examples suggest that patients may participate in trials even if they can access investigational therapies.

The argument that approval policies are necessary because patients would otherwise not have sufficient incentives to participate in clinical trials also assumes that the only possible incentive that researchers could offer is access to an investigational drug. Yet researchers are also permitted to pay healthy subjects to participate in clinical trials. Currently, researchers in the United States, Canada, and Europe are discouraged from paying participants, particularly those viewed as especially vulnerable, and it is frowned on to offer large payments to induce participation, as opposed to fairly compensating participants for their time.29 But policies that prohibit inducement in research out of concern for the poor and vulnerable only further limit the options of those who are already among the worst off.30

Some bioethicists object that even if payment is good for individual participants, there is “something repugnant” about the normalization of financial inducement insofar as it will effectively mean that the bodies of the poorest and worst off are used for the benefit of the well off.31 But consider the alternative. Currently, the worst off cannot access investigational drugs, partly out of concern that they lack incentives to participate in the clinical trial system. So their rights of self-medication and self-preservation are violated in order to induce them to participate in research. In light of that practice, it is certainly not worse to induce people to participate in research by paying them, when paying (p.123) them does not violate their rights or make individual participants worse off. The fact that more of the economically and medically worst off would bear the risks of medical research is a symptom of socioeconomic inequality; it arises because, for many people, participation in research is a relatively good option even though it is risky. The appropriate response to socioeconomic inequality should not be to mask its symptoms by further limiting the options of the worst off. Rather, policymakers and researchers should be encouraged to give patients more incentives to participate in research, thus providing them more choices.

One may object that greater access to experimental drugs would at least make it more difficult to recruit participants for a randomized trial of a drug that treats rare diseases. When there is a small patient population, it is difficult to establish that a drug is effective because all trials are necessarily underpowered. On the other hand, rare-disease communities can also serve as a resource for recruiting trial participants and educating people about the benefits of a clinical trial. In some cases, patients with rare diseases have been given compassionate access to experimental therapies but opted to participate in clinical trials instead, both to benefit the broader patient population and to receive better care.32
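To see why small patient populations leave trials underpowered, as the objection above notes, consider a minimal sketch using the standard normal approximation for comparing response rates in a two-arm trial. The cure rates and sample sizes below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative power calculation for a two-arm trial comparing response rates,
# using the normal approximation to the two-proportion z-test.
# All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

def two_proportion_power(p_control, p_treatment, n_per_arm, alpha=0.05):
    """Approximate power to detect the difference p_treatment - p_control."""
    delta = abs(p_treatment - p_control)
    p_bar = (p_control + p_treatment) / 2
    z_crit = norm.ppf(1 - alpha / 2)
    numerator = delta * sqrt(n_per_arm) - z_crit * sqrt(2 * p_bar * (1 - p_bar))
    denominator = sqrt(p_control * (1 - p_control) + p_treatment * (1 - p_treatment))
    return norm.cdf(numerator / denominator)

# Suppose the standard of care cures 20% of patients and the investigational
# drug cures 40%. With only 25 patients per arm -- a plausible ceiling for some
# rare diseases -- power is roughly 0.33, far below the conventional 0.8 target;
# detecting the same effect with 80% power would take roughly 80-90 per arm.
print(round(two_proportion_power(0.20, 0.40, n_per_arm=25), 2))
```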

These considerations suggest that it is not necessary to prevent people from accessing drugs outside the context of a clinical trial in order to learn about the effects of investigational therapies. But even if policies that prevented patients from accessing investigational drugs did help researchers learn about the effects of new drugs, that would not justify approval requirements. The costs associated with respecting patients’ rights of self-medication and self-preservation do not necessarily justify violations of those rights.

For example, Udo Schuklenk has argued forcefully that terminally ill AIDS patients have rights to access experimental medicines even if there are serious costs to third parties.33 People whose lives are threatened are often permitted to act in ways that impose costs on others if doing so would preserve their own lives.34 Some philosophers argue that even lethal force is permissible for (p.124) self-preservation, yet patients who want to use investigational drugs are prohibited from self-medicating because it could make it more difficult for researchers to test new drugs. Insofar as people can do much more harmful and morally fraught things out of self-defense, surely they can access investigational drugs. Therefore, the right of self-preservation is so powerful that healthcare providers and citizens might be asked to absorb the costs of allowing people to use investigational drugs, in the same way that society bears the costs of other basic rights.

This argument is further supported by an appeal to the distinction between doing and allowing. Even if rights of self-medication made it more difficult for researchers to conduct clinical trials, public officials are not generally morally permitted to kill some citizens to help researchers gather information that is useful for promoting public health. Abigail and Adam may have been able to save or extend their lives by using investigational drugs, and the FDA stopped them. If this kind of government interference were generally allowed, it would have unacceptable implications elsewhere. For example, it would be wrong for public officials to conduct deadly medical experiments on non-consenting human subjects, even if the experiments only involved blocking access to lifesaving goods, like food or oxygen, and even if the experiments yielded very useful information. Or if rights of self-medication could be suspended to facilitate research, then officials could in principle rescind approval for existing drugs to encourage people to participate in additional trials.

Finally, clinical trials may improve if more people have access to investigational drugs. One explanation for the apparent ineffectiveness of premarket safety testing is that premarket tests only establish safety for a specific population, while in practice the safety of a drug varies substantially between patients. Drugs that are safe in healthy patients may be unsafe in unhealthy patients. Drugs that are safe for younger populations might have unacceptable side effects for older patients. The side effects and benefits of a drug also vary across populations. Even for safe drugs that are never recalled, hundreds of thousands of patients are harmed or die each year from adverse reactions, or suffer side effects without receiving any benefit. According to one pharmaceutical executive’s estimate, as many as 90 percent of drugs are only effective in 50 percent of cases.35 This is not a failure of safety testing; it is a necessary limitation. As former FDA Commissioner David Kessler said in his testimony to Congress, premarket clinical trials cannot generate enough data about drugs to anticipate rare but serious adverse reactions or the long-term risks of a drug.36 Clinical (p.125) trials are also very different from typical use conditions, and typical use data may be more relevant to patients than the results of a supervised trial. Therefore, insofar as premarket approval requirements are justified for the sake of learning as much as possible about new drugs, expanding the group of users and conducting post-market surveillance for drugs would generate more and better information than a premarket clinical trial that only includes one patient type using a drug in conditions that are very different from what will be a typical user’s experience.

4.3 Non-Ideal and Ideal Theory
At this point, one may object that my argument for rights of self-medication and certificatory policies relies on a double standard, since it compares an ideal system of self-medication to a clearly flawed status quo. Perhaps there could be a better regulatory middle ground in theory, while in practice rights of self-medication would be a disaster.37 To address this objection, I will compare a certificatory system with an approval system in light of non-ideal considerations, and then compare prohibitive approval requirements with a certification system in ideal theory. In doing so, I will demonstrate that the case for rights of self-medication is even stronger than it may initially seem. In light of non-ideal considerations, regulators and citizens face substantial institutional and psychological barriers to addressing the harmful effects of the approval system. In ideal theory, prohibitive approval requirements are unnecessary because public officials could encourage safe and responsible self-medication through certification and incentive programs instead of prohibition.

We can distinguish non-ideal theory from ideal theory in three ways.38 First, non-ideal theories consider whether making an institution more just is feasible in light of psychological, social, and physical facts. It is also important to clarify which psychological, social, and physical facts are fixed and which are likely to change as a result of institutional changes. So, for example, it may be a psychological fact that patients have poor medical literacy under the current system, but we should not assume that a system that empowered people and gave them greater responsibility for their health would not influence medical literacy. Second, non-ideal theories account for people’s likely non-compliance with principles of justice, either due to personal prejudices or institutional incentives. Third, non-ideal theories advocate transitional efforts at making institutions more just rather than exclusively focusing on an ideal or end-state of justice. For example, non-ideal theorists must also consider whether the ideal policy is (p.126) morally inaccessible because achieving it would involve unacceptable violations of rights or very bad consequences.

Evaluating the merits of a prohibitive approval system against the merits of a certificatory approach therefore requires comparing certification and approval policies in light of the extent to which each can meet its goals, considering each institution’s feasibility, people’s likely non-compliance, and the possibility that officials sometimes face a choice between advocating a more just system by acting unjustly and acting justly while setting back efforts to create a more just system on balance.

Begin by comparing approval policy reforms with certification policies in light of the various ways that a just approval or certificatory system may be psychologically infeasible or infeasible in light of social and institutional facts. A more just approval system would have shorter delays, more effective drug screening, expanded access to lifesaving drugs for patients who do not qualify for clinical trials, and ways of promoting innovation despite the risk that a drug will not win approval. But it is not clear that a more just system is psychologically feasible because, in any system of approval, public officials and voters will be influenced by pervasive psychological biases. In particular, officials face institutional incentives to delay the introduction of potentially lifesaving drugs, and voters are biased to punish officials for drug poisonings but not for the deaths that are caused by drug lag and drug loss.

The first psychological barrier to a more just approval system stems from voters’ tendency to make judgments based on the availability heuristic, which undermines people’s ability to assess risk. Because drug recalls and the dangerous effects of drug use are well publicized, people assume that pharmaceuticals are generally very risky. But the deaths that are caused by approval requirements are rarely publicized, so voters do not recognize the risks of an approval system. The second psychological barrier to a more just approval policy relates to the first. Any approval policy that tasks public officials with certifying drugs as safe and effective will create incentives for officials to enforce deadly delays in order to meet their charge. While manufacturers have enormous self-interested reasons to avoid causing patients’ deaths by selling dangerous drugs, the same cannot be said for regulators who enforce prohibitive regulations. For example, in the United States the FDA’s regulatory authority derives from the legislature, and legislators are themselves accountable to the public. Political scientists who study the FDA argue that the agency has a great deal of independence and power because it maintains a good reputation with the public.39 Yet because the FDA is so reliant on legislative and public support, the agency has incentives to craft its approval policies in anticipation of potential public (p.127) backlash and sanction from elected officials. The seeming independence of regulatory agencies is an illusion insofar as regulators make decisions in order to avoid public criticism.

Because regulators are reliant on public and legislative approval, they are biased toward reducing drug disasters at the expense of overall access. The distinction between Type I errors (false positives) and Type II errors (false negatives) illustrates this point. A Type I error occurs when an agency approves a dangerous drug, and a Type II error occurs when the agency fails to approve a beneficial drug. Since regulators’ power relies on their reputation, their incentives mostly favor avoiding Type I errors, because the media and consumer advocates can easily identify the victims of these errors. On the other hand, when regulators fail to approve a safe and beneficial drug, the people who would have been treated or saved appear simply to die from their diseases. The victims of approval delay go unnoticed. Therefore, regulators have incentives to minimize Type I errors even though a very risk-averse approval strategy is harmful to the public’s health on balance.

There is some empirical support for this diagnosis of pharmaceutical regulators’ intentions. Henry Miller, a former FDA employee, described a case where his team at the administration was prepared to approve an application for recombinant human insulin after only four months. Yet Miller’s supervisor refused to finalize the approval even though he agreed that the data supported the judgment that the drug was safe and effective, because “If anything goes wrong … think how bad it will look that we approved the drug so quickly.”40 The institutional context deterred consideration of the patients who could have benefited from faster access to the drug. Rather, his incentive was to adopt extreme caution out of fear that erroneously approving a dangerous drug would threaten the administration.

Even when pharmaceutical regulators approve a drug, if the public and media conclude that an approved drug is unacceptably risky, then approval could weaken the authority of public officials. For example, in the 1970s the FDA approved a vaccine for the swine flu that effectively protected thousands of patients and may have averted a swine flu pandemic. Yet the vaccine caused several hundred cases of death or paralysis from Guillain-Barré syndrome.41 Media coverage of the Guillain-Barré deaths shook the public’s faith in the FDA and undermined the credibility of subsequent vaccine-based public health campaigns.42

(p.128) A third psychological barrier raises a deeper concern about approval requirements. One of the reasons that the current approval system is especially unjust is that it is not clearly effective at preventing dangerous drugs from reaching the market (since the introduction of efficacy requirements did not reduce the proportion of drug recalls). But regulators have an impossible assignment insofar as they are required to prevent all dangerous drugs from reaching the market. Many of the risks associated with new drugs stem from long-term use, drug interactions, or typical users’ errors, which are difficult to pick up from clinical trial data. And clinical trial data is only an imperfect guide to assessing efficacy; some drugs are approved as effective but subsequent studies discredit regulators’ earlier judgments.43 No matter how long a trial continues, it is impossible to catch all possible problems before a drug reaches the market.

In sum, policies that require regulators to prevent people from using dangerous drugs entrench, rather than correct, the public’s biases. Because citizens indirectly authorize regulators, citizens’ biases cause agency officials to make decisions in light of political considerations rather than purely on the basis of which policy would save the most lives. And since everyone is subject to prohibitive premarket requirements, even unbiased medical experts are bound by policies that do not reflect medical expertise.

A certificatory system would also subject officials to reputational influences that could potentially affect which drugs are certified. But if patients and medical experts judged that a drug’s lack of certification reflected an overly cautious agency, they could still purchase and use the drug in light of the relevant risks and benefits. A certification system would also partially relieve members of regulatory agencies of responsibility for drug disasters because they would not be charged with keeping dangerous drugs off the market. So while some of the psychological biases associated with an approval system would be repeated in a certification system, to the extent that they persist, citizens could at least avoid their deadly effects.

Another distinguishing feature of non-ideal theoretical arguments is that they consider whether policy reforms are feasible in light of social facts. Compare the political feasibility of reforming an approval process to minimize drug lag and drug loss with the feasibility of a certification system. If existing approval policies prompt regulators to be overly cautious because their authority rests on their agency’s reputation, then perhaps insulating regulatory agencies from public influence would solve this problem to an extent. One might think that (p.129) privatization could prevent regulators from acting on the basis of political pressure, but privatizing an agency like the FDA would not solve the problems associated with public influence insofar as officials authorize and empower a private agency to enforce approval requirements, just as they currently authorize public agencies to do so. Yet it is the enforcement and policing functions of the FDA that wrongfully cause patients’ deaths and distinguish approval policies from an alternative certificatory system.

Alternatively, after public officials outside the FDA blocked the agency’s approval for Plan B, Dan Carpenter proposed that the FDA be reformed to act more like the Federal Reserve. He writes:

A cabinet secretary—and by extension, a president—has overruled a drug-approval decision by the Food and Drug Administration. The precedent risks placing the real power for drug approval not just with a cabinet secretary, but also with the White House itself. The only solution, then, is to make the F.D.A. truly independent. Americans have already done this, through the Federal Reserve, to protect our money supply from political meddling; it’s time to do it for drugs. … We would never allow this sort of second-guessing when it comes to our financial health. We should have the same standards when it comes to our public health.44

In other words, to enable the FDA to set pharmaceutical policies in ways that promote public health, Carpenter proposes that the agency’s power should be more fully insulated from concerns about its reputation. Reforming the FDA to be even further insulated from public influence would not only solve the problem of agency curbing, but it would also free the FDA to make policy that promotes the public health more generally. Without the possibility of agency curbing and the need to bow to public opinion, the FDA would no longer be required to play it safe by deciding strategically; rather, it could issue judgments based solely on medical considerations.

If Carpenter’s proposal for reform successfully insulated the FDA from public pressure, then perhaps the agency could adopt a less deadly balance between Type I and Type II errors than the current policy. But such a proposal is unlikely to succeed because it would require a broad base of democratic support for ceding authority to a non-democratic institution. Carpenter’s proposal is to make the FDA more like the Federal Reserve, which regulates monetary policy in the United States. But the Federal Reserve is very unpopular, and recent legislative reforms have limited its power and independence (p.130) and required more transparency.45 Some policymakers campaign to abolish the Federal Reserve entirely.

A fully independent agency to approve drugs is even less likely to win the necessary political support, considering that agencies like the Federal Reserve face such opposition even though they do not prohibit particular individual choices. Also, when people disagree with the judgments of an agency like the Federal Reserve, the agency’s practical authority can still be justified by the fact that the nation needs some unified monetary policy to maintain economic growth. No similar public good is served by prohibitive pharmaceutical regulations, however, because an agency could certify drugs instead and allow patients to make their own choices about pharmaceuticals.

This is not to suggest that a certification system is likely to fare any better in light of the relevant political constraints. The previous discussion of the regulatory reversal test began with the premise that voters frequently favor the status quo without justification, and just as they would likely oppose making pharmaceutical regulators more powerful and independent, they would likely oppose a certificatory system as well. But to the extent that the merits of a proposal for reform depend on political feasibility, reforms in favor of a more just, efficient, and effective approval system face many of the same political hurdles as reforms in favor of a certificatory system.

A better approval system is also unlikely to be feasible in light of physical facts, simply because of the nature of its mission. Approval requirements encounter the competing pulls of two goals: improving the effectiveness of drug screening to minimize the number of drug recalls and accidental poisonings, and helping patients access the drugs they need as quickly as possible. These two goals reflect the physical constraints that challenge hypothetical proposals for reform. Clinical trials provide useful information about drug safety but take years to conduct and interpret, so it is not feasible to implement a system that speeds up trials, because some of the effects we are interested in don’t show up until later. In contrast, a certificatory system would resolve this tension by abandoning the goal of minimizing the number of drug recalls and dangerous drugs on the market and focusing instead on the informational function. In this way, it would be more feasible for a certification system to meet its goals than for an approval system to meet its own.

In addition to the feasibility of a proposal in light of psychological, social, and physical facts, non-ideal theory is also concerned with people’s potential non-compliance with a policy. People generally comply with existing approval (p.131) policies (I will discuss exceptions in the next chapter), so compliance is not a significant barrier to reforming an approval system. There is also little reason to think that officials would refuse to comply with a duty to certify drugs in the absence of approval requirements, since they currently comply with their duties to certify drugs.

On the other hand, one may be concerned that compliance with a certification system would be less likely because manufacturers would be more likely to mislead patients about the nature of their products if they were permitted to sell drugs before the drugs were approved for particular conditions. There are two ways in which a patient may be misled about the nature of a drug. First, a manufacturer may provide her with insufficient information to judge a product, and she may make incorrect inferences about the drug in the absence of that information. This failure would not necessarily constitute a failure to comply with a certification system, as long as patients were aware of a drug’s uncertified status. If an approval process were necessary to protect patients from making choices so misinformed as to undermine their consent, then public officials should likewise question patients’ ability to consent to use drugs during clinical trials and to use approved drugs off-label. Though there are legitimate reasons to be concerned about people’s ability to make judgments in the absence of information, as long as a person is aware of the potential for ignorance, she may take this form of uncertainty into account as she decides, just as she accounts for the uncertainty associated with taking a drug that has known risks.

Second, in the absence of approval requirements, one may worry that manufacturers would be more likely to commit fraud. If manufacturers could sell drugs without authorization, then a greater number of drugs for which there is no consensus about safety or efficacy would be available to the public.46 Without a consensus, it would be more difficult for consumers to ascertain the truth or falsity of marketing materials and drug labels and more difficult for people to establish that they were harmed by misleading information.

This is a legitimate concern about the feasibility of fraud regulations, and about likely non-compliance with them, under a certification system. However, proponents of a certification system have resources to address likely non-compliance short of enforcing deadly approval requirements. For example, (p.132) manufacturers should be liable for deceptive and fraudulent marketing under a certification system, just as they are currently responsible for the claims they make in their labeling and marketing materials. If laws that prohibit fraud were reliably enforced and if the penalties were substantial, then public officials could deter firms from making false or deceptive claims about their products even if firms were authorized to sell drugs without premarket approval. To the extent that holding manufacturers responsible for fraud is not a sufficient deterrent, officials could increase the penalties for fraud rather than preemptively requiring an entire industry to seek approval for every label and advertisement.

Pharmaceutical manufacturers, like firms in other industries, also have substantial incentives to seek certification to prove that they have quality products. Investors may reasonably call for certification as a condition of their continued support. This is not to say all firms would have decisive incentives to seek government certification. Already, private companies, such as hospitals, insurers, and managed care organizations, use their own standards to assess the safety and efficacy of approved drugs when they develop drug formularies. This practice of certifying drugs for use by specific hospital patients or insurance plan members would be even more valuable in the absence of approval requirements.

On the other hand, concerns about manufacturers’ non-compliance with fraud regulations may persist in light of the fact that some businesses have incentives to prioritize short-term profits over long-term safety and efficacy. Under a certification system, firms would be permitted to sell drugs on the basis of limited, short-term evidence as long as patients were aware of the lack of long-term studies. One may worry that such firms would also lead patients to believe that their products did not carry long-term risks and that it would be difficult to seek damages on behalf of patients who were the victims of these false claims. The Vioxx recall illustrates this point: it is sometimes difficult to identify the patients who are harmed through long-term use when the harmful long-term effects of a drug consist in an elevated risk of events that occur at some baseline rate anyhow. Yet this particular worry about fraud and non-compliance applies with equal force to approval and certification requirements, since the concern is that manufacturers may fail to investigate long-term effects, which the approval process is poorly equipped to assess as well.

The final requirement of non-ideal theory is to consider that in some circumstances a policy is justified as the best available option, even if a better option would be available were concerns about feasibility and non-compliance not in play. Yet in the case of pharmaceutical regulation, concerns about the psychological, social, and physical feasibility of effective reform undermine calls to reform the existing approval system more than they undermine the case for a certification system. And the concern that a certification system would invite less compliance with fraud standards can be addressed by strengthening fraud protections rather than through an approval system.

(p.133) Those who nevertheless suspect that the effects of a certification system would be awful given the aforementioned non-ideal considerations should consider why they value the current approval system. Do people value approval requirements for their epistemic benefits (informing patients and providers about the nature of new products) or for their prohibitive effects (preventing people from using new products until they are better understood)? If they value the epistemic benefits of an approval system, a certification system can provide the same benefits. Those who also value the prohibitive effects of an approval system can continue to live their lives as if an approval system is still in place by refusing to use uncertified drugs. If all citizens valued the prohibitive effects of an approval system, then, under a certification system, citizens would universally comply with regulators’ recommendations, and patterns of pharmaceutical use would look the same as they do under an approval system.

To the extent that the effects of a certification system would differ from the effects of approval requirements, those differences would be explained by the fact that some citizens would choose not to defer to regulators’ judgments. In these circumstances, a certification system would allow patients to consent to use risky drugs whereas approval requirements would force all patients to comply with risky and deadly prohibitions. So while considerations of feasibility and compliance are relevant to non-ideal political theory, when we consider the effects of various proposals for pharmaceutical policy reform in non-ideal theory, even if the effects of various proposals are uncertain, we should also bear in mind the clear moral advantage of a certification system. Namely, whatever its failures in implementation, one effect of a certification system is that officials would not be empowered and encouraged to kill patients by withholding access to lifesaving drugs.

Turning to ideal theory, imagine that citizens and public officials could overcome psychological and political barriers, and compare a more effective and humane approval process to a certification system under those conditions. A principled evaluation of an approval policy’s merits may proceed by assuming that officials could overcome their biases and fully comply with the law. But such an evaluation should then be compared with a principled evaluation of a certification system, which may proceed on the assumption that patients and providers are also capable of overcoming well-known cognitive biases and that manufacturers fully comply with anti-fraud legislation and good manufacturing practices. Through the lens of ideal theory, the primary reason to favor a certification system over approval requirements in principle is that approval requirements unnecessarily violate patients’ rights when officials could ensure safe drug use in other ways.

All approval delays are marked by the pro tanto wrongfulness of coercion because they are backed by legal penalties. If a person distributes an unapproved drug, then he is subject to legal penalties such as fines or jail time. These policies violate patients’ rights of self-medication. Since it is better if a person suffers (p.134) drug-related harms because she knowingly chose to use a dangerous drug than if she suffers because a regulator coercively prohibited her from using a potentially therapeutic drug, coercive approval requirements are pro tanto worse than a system that does not rely on coercive threats.47 Coercion can be justified if it is necessary to prevent wrongdoing or to achieve some other justified goal, but coercive approval policies are unnecessary because, as I argued in the previous sections, officials could achieve most of the goals of approval by providing patients and providers with incentives to self-medicate in accordance with recommendations instead of by enforcing prohibitions. For example, officials could pay citizens to comply with a certification agency’s recommendations, and insurance providers could provide incentives as well.

Critics of ideal theory may reject the usefulness of assessing policy in light of these assumptions. As Laura Valentini writes, ideal theoretical arguments are often paradoxical: they are offered as necessary for guiding our thinking about what we should do, yet they seem incapable of offering any concrete proposals, and it is difficult to strike a balance between sticking to one’s principles and attending to the relevant facts.48 But it is nevertheless important to consider which principles are sensitive to psychological facts and political constraints, even if it is difficult to strike the right balance, because otherwise people may let themselves off the hook for falling short of what they ought to do simply because they are not motivated to do it.49 In the face of non-ideal constraints, it is important to acknowledge that non-ideal proposals do not always reflect our values and that it is valuable in itself to know whether a policy can be justified in principle even if the policy would never be achieved in practice. Thinking about which policy would be ideal can also clarify other values, and thereby provide theoretical resources for assessing existing policies.

So even if approval requirements were a decent pragmatic response to concerns about manufacturers’ failure to comply with anti-fraud legislation and patients’ ignorance about their health and the effects of drugs (a claim that I dispute in the previous section), they would not be justified in principle when compared with a certification system. All else equal, if it were feasible either to educate a patient or to force him or her to comply with medical recommendations, the educative approach would be morally better. This ideal has equal force in clinical and public health contexts.

(p.135) In summary, just as officials should not use civilians’ bodies and homes as human shields in the service of a just cause, especially when it is unnecessary, officials also should not violate patients’ rights to advance scientific knowledge or to discourage wrongdoing when there are other ways to accomplish these goals. So even if there would be some costs to rescinding prohibitive approval policies, these costs could not justify the rights violations that the existing requirements entail.

4.4 The Risks of an Approval System
Even if some approval requirements could be effectively enforced in ways that minimized the severity of rights violations and the loss of life that characterizes the existing system, enforcing premarket approval policies is extremely morally risky, whereas allowing patients to access unapproved drugs is not as morally risky. Second-order considerations of moral risk lend further support to the normative justification of a certification system.

In cases of life and death, the mere risk that an action could be wrong is a reason against doing it if that risk is substantial.50 First-order moral deliberation about whether killing is justified is insufficient to justify killing because, just as a person can be culpably reckless in exposing others to an undue risk of harm, he can also be culpably reckless in deliberating about the wrongfulness of harming others. In both cases, people should be very cautious about killing or risking the lives of others. Dan Moller defends this principle by appealing to the fact that each of us has reason to believe that we are mistaken about some moral facts, just as people in the past were mistaken about the ethics of slavery and warfare.51 And it is especially easy for someone to err in moral reasoning when he is deliberating about complicated matters, such as those that involve large numbers of people, uncertainty, or probabilistic judgments.

This is not to say that people should never act in ways that could potentially kill, or to suggest that public officials should be paralyzed by extreme moral caution in all that they do. Officials should only take action that involves killing, such as the enforcement of approval policies, when there are compelling moral reasons in favor of it, taking their moral uncertainty and their awareness of that uncertainty into account.52 Moller suggests that the relevant reasons should account for the likelihood that an action involves wrongdoing, the severity of the wrongdoing, (p.136) the cost of not acting, the agent’s level of responsibility for wrongdoing if it turns out that it was wrong to act, and the potential wrongdoing of not acting.53 He therefore concludes that public officials should avoid enforcing policies that potentially violate rights even if they believe that enforcement is justified. We might add that officials have especially strong reasons to avoid violating rights when doing so is unnecessary.

Turning to approval policies, public officials should consider the likelihood that delaying access to a drug is wrong even if they are skeptical of the foregoing arguments. If I am right about the wrongness of enforcing approval policies, then public officials kill patients, which is a severe wrongdoing. In contrast, it would be less costly or morally risky to refrain from prohibiting people from accessing unapproved drugs because then patients would have the opportunity to consent to the risks of a drug, which would mitigate officials’ responsibility for risky pharmaceutical use. But with approval requirements, officials are responsible for the deaths they cause by enforcing approval policies.

An analogy to just war theory further illustrates my claim that officials should refrain from enforcing approval policies for second-order reasons even if they reject my first-order arguments against approval policies. In warfare, there is a very high burden of justification for killing people who are not liable to be killed, such as civilians. This justificatory burden requires that the killing be both necessary and proportionate to the goods achieved by killing. Yet in the domestic context, public officials do not hold themselves to these same justificatory standards when enforcing policies that cause their own citizens to die. Even if one rejects my previous arguments that citizens have rights of self-medication and that approval requirements kill citizens, the enforcement of deadly approval policies is still neither necessary nor proportionate to the good it achieves, since a certification system could achieve most of the benefits of preventing drug poisonings without causing people to die while waiting for approval. If domestic policy officials held themselves to the same justificatory standards that soldiers apply in circumstances of warfare, they would find that the case for enforcing approval requirements is not strong enough to justify the lives lost.

Of course, it is not always simple for people to know that they are doing something morally risky, just as it is not always simple for people to know that they are acting wrongly. Some philosophers suggest that blameless factual and moral ignorance can be an excuse, so we should withhold assessments of moral culpability when people are ignorant of moral requirements.54 If regulators are (p.137) blamelessly ignorant of the moral wrongness of causing patients to die while waiting for drugs, or of the wrongness of taking such a risk, then, assuming they have discharged their epistemic duties, they might be excused from blame for enforcing regulations.

This argument is controversial. One may object that blameless moral ignorance is not an excuse, even though blameless factual ignorance is exculpatory.55 If so, regulators are morally responsible for the deaths they cause through regulation even if they do not know that it is wrong to cause deaths in this way. But even if blameless moral ignorance does excuse, the actions of regulators are still wrong; regulators are merely personally excused from responsibility because they are blamelessly ignorant of the fact that it is wrong to kill patients by withholding access to therapeutic medicines, or to take such a substantial risk. Consider how manufacturers are rightly held liable when they make misleading claims about dangerous drugs. Those manufacturers are liable because patients cannot consent to use drugs without reliable information, and preventing patients from making voluntary choices about their treatment is wrong. Even if a manufacturer’s employees mistakenly thought they were not misleading patients, if a drug is fraudulently advertised or mislabeled, then the employees acted wrongly. Similarly, employees at regulatory agencies may not think they act wrongly, but they do insofar as their actions prevent patients from making treatment decisions.

In both cases, organizations that impermissibly violate patients’ rights of self-medication should be held liable for the harms they cause and for their lack of caution in morally risky circumstances. Though public officials who enforce approval requirements do not know the identities of those who are harmed, if they know that some portion of the people waiting for access will probably be harmed by the lack of it, then they are responsible for the harmful effects of their risky policy, just as manufacturers are responsible for the harmful effects of risky drugs when the risks are not disclosed and patients could not consent to them. Today, manufacturers who fraudulently market drugs can be legally compelled to compensate patients who were harmed. So too, the government should be legally required to compensate patients and their families when patients are harmed by coercive approval policies.

I do not doubt that the employees of regulatory agencies believe they are doing the right thing when they require approval for new drugs, but that belief is not sufficient to justify their lack of caution. Researchers and public officials who prevent manufacturers from selling investigational drugs to patients believe that they are protecting patients from making a harmful and dangerous choice but fail to consider the harms and dangers of their own choices.

(p.138) 4.5 The Risks of a Certification System
Do similar considerations about moral risk apply to manufacturers that sell potentially harmful drugs? In the previous section, I argued that even those who reject the foregoing arguments against approval requirements should consider that the mere possibility that these arguments succeed is a reason for caution. But similarly, the mere possibility that the foregoing arguments do not succeed may be a reason for manufacturers to exercise caution in the provision of new drugs. It is morally risky to make pharmaceuticals. Manufacturers are not only uncertain about the risks of their products, but they may also be morally uncertain about the permissibility of selling dangerous products to consumers, especially in light of concerns about exploitation. These moral risks should inform manufacturers’ conduct. In some cases it may be wrong to provide patients with dangerous drugs or to exploit patients because doing so shows a lack of moral caution. But even if manufacturers should exercise moral caution when developing and selling drugs, the chance that they are acting wrongly does not necessarily tell in favor of policy interventions.

For manufacturers, two kinds of risks are worth considering. First, when manufacturers develop and sell new pharmaceuticals, they risk negligently selling dangerous drugs or misleading patients, even if they do not knowingly act wrongly. I have argued that if a manufacturer takes care to disclose all known and unknown risks, then it is not wrong to sell dangerous drugs. But this is a controversial statement. It may be wrong to sell drugs about which little is known even if that fact is disclosed. I have also defended the claim that it is not wrong to sell investigational drugs to patients with terminal illnesses, but this claim is controversial too. Some philosophers argue that profiting from another person’s extreme need or deprivation is wrong because it is exploitative. These considerations may give manufacturers reason to be exceptionally cautious when providing people with access to investigational drugs, but, even if manufacturers do fail to exercise appropriate moral caution, it does not follow that public officials may permissibly prevent them from selling unapproved therapies.

The first moral-risk argument against selling investigational drugs concerns the risks of providing desperate people with products about which there is limited information. The possibility that I am wrong about patients’ ability to consent to the unknown risks of drugs may call for manufacturers to be exceptionally cautious about selling uncertified drugs. They may therefore choose to implement testing requirements for patients or to require a physician’s authorization before patients use their uncertified products. Though I have suggested that such a policy would be unnecessarily paternalistic and would express an offensive judgment about patients’ ability to choose their course of treatment, (p.139) insofar as manufacturers have the right to decline to sell their products at all, they would also be entitled to sell their products only under certain conditions out of a desire to be cautious and to avoid moral risks.

On the other hand, the potential moral risks of selling uncertified drugs would not license an approval system, even if manufacturers do have obligations to exercise caution in distributing their uncertified products. Public officials should exercise caution when considering interference because there are substantial moral risks to preemptively interfering with transactions between patients and manufacturers, even if manufacturers’ conduct is also morally risky. If manufacturers do have duties to take extra care to obtain consent when selling investigational drugs, then public officials may at most hold manufacturers liable for failures to disclose the required information.

The second moral-risk argument against selling investigational drugs concerns the risk of exploitation. The argument goes like this: Exploitation consists in taking advantage of someone who is badly off. Exploitation is wrong. Selling potentially dangerous and ineffective investigational drugs to dying patients takes advantage of people who are badly off. So selling investigational drugs is wrong. I have suggested that this argument is unsound because exploitation of this kind is not immoral: one does not violate a person’s rights by simply giving her an additional option. The paradigm cases of exploitation are those where a vulnerable person’s only reasonable option benefits whoever provides that option much more than it benefits the vulnerable person. But in these cases, whoever provides the vulnerable person with an additional option still benefits her more than anyone else does.56 So while desperate and dying patients are vulnerable and, by selling drugs to them, a manufacturer may be accused of profiting from their desperation, it is unclear why manufacturers act more wrongly than people who do nothing for sick patients. As Matt Zwolinski has argued, if it is permissible to decline to interact with people who are badly off, it is also permissible to make them an offer that they could refuse without being made any worse off.57

But this argument could be false, and if it is, then manufacturers may wrongfully exploit patients by profiting from the sale of investigational therapies to desperate people. In light of this possibility, manufacturers may have reason to avoid charging exorbitant prices for drugs that are unlikely to work. Concerns about moral risk could theoretically support a more cautious approach to drug pricing if it were possible to provide more affordable access without diminishing (p.140) incentives to develop new drugs. On the other hand, insofar as one accepts this argument, it would apply not only to investigational therapies but also to all drugs marketed to sick and dying patients, and to other lifesaving products as well. For example, if a patient is diagnosed with bacterial meningitis, he must purchase and use an expensive antibiotic to save his life. But whether the antibiotic is an investigational drug or not does not seem relevant when we ask whether it is permissible for a manufacturer to sell it to him at a high price. If it were wrongly exploitative to target the worst off when selling drugs, then that argument would seemingly call for a cautious approach to the sale of approved drugs as well. In light of this consideration, it is clear that caution carries its own risks. Manufacturers’ voluntarily limiting the prices of lifesaving drugs could deter investment in therapeutics going forward. For this reason, considerations of moral risk may not call for a cautious approach to the sale of investigational drugs on balance, even if there are pro tanto reasons to account for uncertainty about the wrongfulness of exploitation.

But even if it were wrong to risk exploiting patients by selling unapproved therapies, it still wouldn’t follow from that claim that manufacturers ought to be legally prohibited from providing investigational drugs to patients. This is because it would be more risky to legally limit the options of the worst off, even if it is also immoral in some way to provide those options. For example, paying low wages might be morally risky. To the extent that it is, potential sweatshop owners have reasons to raise wages in light of their moral uncertainty as long as high wages would not deter employment. But for similar reasons, public officials would also have reason to permit low sweatshop wages, in light of the moral risks associated with coercion.

Furthermore, those who would press the charge of exploitation against manufacturers selling unapproved drugs should consider that approval policies that limit patients’ options make exploitation more likely, not less. For example, patients are more likely to make imprudent, desperate decisions when more conventional treatments are unavailable.58 So granting patients who have no other medical options the right to try investigational therapies could reduce exploitation on balance, if it prevented alternative care centers and supplement manufacturers from profiting from vulnerable patients with few remaining choices.

(p.141) 4.6 Conclusion
In summary, I have argued that public officials wrongfully kill patients by withholding access to investigational drugs. Public officials’ belief that patients ought to be protected from the option to use dangerous drugs, even if some patients will die as a result of these protections, relies on the premise that it is morally worse for patients to be killed than to be allowed to die. I accept this distinction, but a second look at the case reveals that manufacturers do not kill patients by selling potentially dangerous investigational therapies because patients can consent to the risks of the treatment. And a third look reveals that well-intentioned employees at regulatory agencies are culpable for the deaths they cause when they prevent patients from choosing to use investigational therapies.

This killing cannot be excused by the need to test new drugs, because approval requirements are not necessary to obtain information about them. Nor can worries about manipulative marketing or the exploitation of vulnerable patients justify an approval system. Perhaps regulators are blamelessly ignorant of the moral facts. This could be the case if regulators believe that drug lag is not morally wrong. Nevertheless, officials are also obligated to exercise caution when coercing people, especially in matters of life and death. So even if officials are not convinced of the wrongfulness of killing people by enforcing approval requirements, the mere risk of wrongful killing calls for a more cautious approach to regulation.

I have also developed a methodological argument in this chapter, which is that assessments of an approval system should be held to the same standards as assessments of a certification system. One may object to calls for a certification system on the grounds that reform is infeasible, but the same can be said of efforts at a more just approval system. Are we then to accept the deadly status quo? Perhaps the status quo approval system will persist, but that doesn’t mean it ought to. And as a matter of ideal theory, a certification system is clearly preferable because it avoids the wrongfulness of unnecessarily coercing patients and providers and killing people through approval delays.
