Work In Progress (Chris)
Tuesday, February 18, 2003
Before I can say some of the things I want to say on this blog, I want to talk a bit about my ethical system.
Religiously, if it can be called that, I am an agnostic defaulting to the atheistic side of the spectrum. While I haven't entirely closed the book on the existence of a God, I have little to no conception of what a deity would be like. I can't get any moral certainty from that position.
While I guess it would be possible for someone in my position to just totally abandon ethics and live a hedonistic lifestyle, that doesn't interest me. Instead, I've tried to develop some ethical guidelines that allow me to evaluate choices and choose the best one.
Most of these rules assume that I can evaluate the effects of a choice on the universe as a whole. I'm neglecting the details of this in this post. This assumption is important, though, and many disputes can arise when people evaluate the same situation and set of choices differently.
That being said, my main rule is very similar to that of pure utilitarianism. I believe that one should evaluate all choices based on the goodness of the expected end state of the universe, and choose an option that is close to the best.
Simply stated, one should choose an option that is one of the best choices, even if there are no good choices.
This is fairly complicated, so let me clarify some points that could be misunderstood. It may seem that I'm saying that the end justifies the means. I think this is true, but I use a much broader definition of ends than most critics of this approach do. When evaluating the end state of the universe, I have to lump together both the intended ends and those ends that derive from the chosen means.
Some may criticize this approach on the grounds that there is nothing I would not do if the situation were appropriate. This is true, but my gut instinct is that the situation would have to be really bad before I did anything truly heinous, and even then it would be the best option of a bad set of choices.
The second issue I have is the definition of the expected end state of the universe. A natural definition of expectation is the mathematical one: the sum over all states of prob(state) × goodness(state). But I don't think that is completely appropriate here. I think that highly probable outcomes should outweigh low-probability but very good ones. By mostly considering high-probability outcomes, we can mitigate to some extent tragedies of the commons, which are very likely when there is a low-probability, high-goodness state surrounded by high-probability, low-goodness ones.
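To make the contrast concrete, here is a minimal sketch of the two evaluation schemes. The function names, the exponent-based weighting, and the numbers are all my own illustrative assumptions, not anything specified in the post; they just show how overweighting high-probability outcomes changes the verdict on a low-probability, high-goodness state.

```python
# Hypothetical sketch: outcomes are (probability, goodness) pairs.

def expected_goodness(outcomes):
    """Standard mathematical expectation: sum of prob * goodness."""
    return sum(prob * goodness for prob, goodness in outcomes)

def weighted_goodness(outcomes, exponent=2):
    """An illustrative variant that overweights high-probability
    outcomes by raising each probability to a power > 1 and
    renormalizing before averaging."""
    weights = [prob ** exponent for prob, _ in outcomes]
    total = sum(weights)
    return sum((w / total) * goodness
               for w, (_, goodness) in zip(weights, outcomes))

# A low-probability, high-goodness outcome surrounded by a
# high-probability, low-goodness one:
outcomes = [(0.05, 100.0), (0.95, -10.0)]

print(expected_goodness(outcomes))   # -4.5
print(weighted_goodness(outcomes))   # roughly -9.7, close to -10
```

Under the plain expectation, the rare windfall pulls the score well above the likely outcome; under the weighting scheme, the evaluation stays near the goodness of the probable state, which is the behavior the paragraph above argues for.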
Another issue is that I specify to choose an option that is close to the best, rather than always necessarily choosing the best possible choice. There are two motivations behind this qualification. The first is to answer many criticisms of pure utilitarianism by introducing elements of rule utilitarianism and ethical cynicism. The idea here is that there is a little bit of slack to allow one to have integrity, while at the same time preventing one from causing great damage by sticking to principles even when those principles can lead to a very bad end state.
This answers some of the criticisms of pure utilitarianism (such as the one given by Den Beste above), especially with regard to things like keeping commitments. I propose that one should generally act with integrity, and that these commitments should only be violated when keeping them would severely impact the final state of the world.
As a concrete example of this, while I generally believe that people and organizations should keep their word, I would have no problem with nations choosing to abrogate peace treaties with horrible dictators, like Hitler or Saddam. This is something that a strict rule utilitarian with a rule like "Always keep one's word" might have trouble doing without violating his ethical system. (Not that I would expect this to stop anyone in practice.)
The second reason for not explicitly choosing the best possible option is that it allows one to act in most situations without much need for ethical deliberation. In most cases, the amount of impact a choice will make on the end state of the world is trivial. In those cases, there's no need for deliberation as to which choice will maximize goodness. If there's no obvious best choice, choosing any will do.
Finally, I don't believe remaining passive eliminates one's ethical responsibilities. I think that choosing not to act is an action in and of itself. A consequence of this is that rejecting a choice outright in favor of inaction doesn't really benefit anyone, unless inaction is the better choice or a new option is proposed.