
The morality of aggregate harm

Mithrae (DP Veteran)
Joined: Sep 15, 2013 · Messages: 9,604 · Reaction score: 5,049 · Location: Australia · Gender: Male · Political Leaning: Liberal
There's a moral question I've been thinking about on and off over the past six months, but I'm not sure most people would even acknowledge the principle it's predicated on. Presumably we all agree that something needs to be done about climate change, for example - nothing controversial about scientific facts, right? - but is there a personal moral obligation to reduce or limit one's own contributions to that aggregate harm? In itself, there's no positive or negative moral value to taking a flight; a thousand or a hundred thousand people could go on flights and there'd be no appreciable harm caused to anyone. But when it's billions of people taking flights every year, that becomes a significant contributor to adverse effects on the quality of living and the very lives of hundreds of millions of people. It seems fairly straightforward that if a set of actions has a predictable morally-negative consequence A, and my actions contribute a fraction B towards that set of actions or consequences, then however tiny that fraction may be, my actions are immoral on the scale of A × B. If a billion people are likely to die and my contribution was one part in ten billion, then my actions are on the scale of one-tenth as bad as a negligent homicide or an intentional homicide (depending on my level of knowledge, callousness and "screw the liberals" malignancy in taking flights and so on).
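To make the arithmetic of that example explicit (just a worked restatement of the numbers above, not a claim about real-world figures):

\[
\text{culpability} = A \times B, \qquad A = 10^{9}\ \text{deaths}, \quad B = 10^{-10} \quad\Rightarrow\quad A \times B = 0.1
\]

i.e. on this reckoning, one's contribution would carry about one-tenth the moral weight of a single negligent (or intentional) homicide.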

I suppose the obvious counter-argument would be "If taking a flight is otherwise morally acceptable, how can other people's also-acceptable actions make it suddenly immoral? How can a billion rights make a wrong?" But that seems quite shallow, because that's the format of almost all our moral decisions: shooting a gun at a location is perfectly moral and standing in a location is perfectly moral, but shooting a gun at a location when someone else has decided to stand there is immoral.

On the other hand, the nuance that might legitimately be introduced is that since in cases of purely aggregate harm like carbon emissions there must, almost by definition, be some safe level of behaviour below which there's no appreciable harm, if my actions were of a type or scale which would be morally acceptable even in aggregate, they might still be acceptable even when others' actions exceed that scale and thus do produce harm. I think of it as a more practical reformulation of Kant's categorical imperative - act only in such a way that you can will that all others should do likewise - applicable only in these cases of aggregate harm: if it is a morally-driven desire that everyone's personal carbon footprint be X or lower, then it must be moral to include myself as one who can and should have a carbon footprint of X or lower, even if that positive value contributes to the aggregate harm of others' higher footprints. The fact is that for carbon footprints and many other cases of aggregate harm, it's nearly impossible not to contribute towards that aggregate harm (our bodies literally turn oxygen into carbon dioxide), so the only real question is whether the moral yardstick should be an extreme and impractical 'minimum' level of contribution or simply a sustainable level of contribution.

Is everyone with me so far? I'm talking mostly about climate change but obviously there are other examples of aggregate harm; meat-eating being probably the next most widely-recognized (potentially in about the same moral ballpark as anti-vaxxing even considering contemporary human impacts alone), and ultimately almost any kind of material consumption or waste due to their contributions to resource depletion, pollution and habitat destruction. It seems clear to me that unsustainable levels of contribution towards aggregate harm must be immoral... and yet I have to wonder whether I belong to a tiny minority in that view since it seems that even folk who profess to care about other people, animals and the environment mostly seem to have driving, eating and consumption habits more or less indistinguishable from anyone else in our wildly unsustainable societies.


Now with that baseline, the question I've been mulling over is how (if at all) does the moral status of unsustainable contributions towards aggregate harm change when that or a similar harm becomes inevitable and avoidance efforts become pointless?

An extreme example would be discovering that an asteroid is going to obliterate human civilization in a few months; would it still be immoral to fly a private jet or own a superyacht after that discovery? Is there any point at which it becomes morally legitimate to decide that it's all screwed up anyway so there's no point in worrying about negligible 'contributions' to the inevitable? And if so how would we determine in less extreme scenarios where that point is?
 
The premise of politics is that aggregate decisions need to be made in the aggregate somehow. Flights may be a bad example because there is so much room for argument about exactly what happens and what the limiting mechanisms might be. To take a simpler case, if there are 100,000 passenger pigeons left, a person might think, what's the point of being an environmentalist? If I don't shoot one, somebody else will! One person's act won't make a difference if some deal is reached to save them, and it won't make a difference if it isn't.

I'm not sure that this is actually a thing - positive law seems greatly overestimated, and I'm glossing over the details of how the "deal" is imposed on people. Question: is it possible to analyze a situation like this without deeply involving consideration of the role and specifics of violent action (whether state-sponsored or otherwise)?
 
Generally, I consider decisions made in full knowledge of aggregate harm or indirect harm to be just as immoral as direct harm. For example, if an insurance executive makes a decision that will knowingly cause the death of thousands, then that person has that blood on their hands. If a politician reduces spending that will cause the death of or harm to a group of individuals then they are responsible for that moral failure as well. Where it gets hard is how to craft laws that follow that ethical responsibility that we have. This gets even harder if we have a large group of people who only want to look at direct harms and ignore aggregate ones out of their own self-centeredness.
 
There's a moral question I've been thinking about....
Your post reminds me of a paper written by Peter Singer (link to a YT video) that proposed that many common activities and expenditures, such as buying luxury goods or eating at expensive restaurants, are morally wrong if the money spent on them could have been used to save lives or alleviate extreme suffering. I think it could easily be applied to the idea you are proposing.

The problem is, even if Singer is right, it is simply beyond human moral comprehension (though not beyond logic) to consider the lives of so many people simultaneously. You may know that commercials asking for assistance for starving people show only one or two people, because most people can comprehend that; show them 1,000 starving people and the moral compunction to help doesn't go up, it goes down!

Which is why people tend not to act on a thing unless they can be convinced that the thing will directly affect them, or they can see how it affects them directly, or they are convinced their contribution to stopping it can help.

I think another factor is chronic change vs acute change. A sea rise of 6" over 30 years (hypothetically) is impossible for anyone to see, but 6" overnight, people will notice. I think that people resist having to give up the things they want, and if climate change asks that a person give up something now for an effect they cannot measure, they will rationalize it.

Now with that baseline, the question I've been mulling over is how (if at all) does the moral status of unsustainable contributions towards aggregate harm change when that or a similar harm becomes inevitable and avoidance efforts become pointless?
I think it's worth noting that I make a distinction between morals and morality. Morals are the rules that people believe they should follow, whereas morality is the system of morals that are commonly accepted and enforced. Think about it like the rules of football. Each player might have ideas about what rules should be changed, added or deleted, but when they play, they all agree to play by the shared set of rules. Every person would benefit by ignoring the rules, but by and large most people agree to follow them, even though it's in each person's individual interest not to, because if everyone cheats, the structure of the entire system fails, which makes everyone worse off.


Think about how hard it is to create a set of rules that applies to a large group. Now if you have that group in a room, and everyone in the room sees everyone else's agreement to the rules, they are more likely to agree. But getting people to agree to rules across states or even countries? That will take time. The Geneva Convention is an example, and so is the Paris Climate Accord.
 
Yes, for sure. I think most people would agree that the moral obligation to refrain from harm or evil is weightier than the moral obligation to render help or to actively do something good. Whether I watch a trapped kid drowning in a pool or I myself actively hold him under the water 'til he dies, the practical outcome is the same, but I imagine most folk would agree that the active murder is at least a bit more reprehensible than the failure to render assistance. Singer's angle of an obligation to render help to the poor in an abstract sense essentially argues an aspect of Jesus' morality, derived from his understanding of "love your neighbour as you love yourself." For example, as said by Luke's John the Baptist to the crowds who asked his guidance, "Anyone who has two shirts should share with the one who has none, and anyone who has food should do the same," which ultimately led to Jesus' followers going around with little more than the clothes on their backs or, in the watered-down version of Acts, communal modest living and sharing with all who had need.

I think we'd all agree that there is some obligation to render help to others, such as a drowning kid, but does the obligation really extend to that extreme? I'd be surprised if Peter Singer lived quite that minimalistically, though I've been in contact with a sect of Christians who have. But the case for some degree of economic minimalism can be turned into an obligation to refrain from harm rather than 'merely' render help by noting that we live on a finite planet with finite resources, and even if the global economic pie is growing over time it's still doing so at a finite rate: at any given point in time, for me to have a significantly bigger slice of the pie will generally mean that someone else will have a smaller slice... and we obviously have a pretty good idea where in the world, and from which parts of developed societies, the extra room for all the bigger slices of pie has come! Seems to me that implies that folk with more than say 200% of global average wealth (currently ~US$170,000 per adult; debatable whether ownership of a modest personal/family dwelling should be included in that) can be said to be actively sustaining and engaged in the harm of other people's impoverishment and have a moral obligation to refrain from that harm by reducing their economic self-service, while even folk with say 60-90% of the global average (~$50,000 to $80,000 per adult) should aim to be happy where they're at and avoid energetically increasing their share. The downside of that approach is it doesn't really account for folk who spend their money without increasing wealth, for example on holidays or rapidly-depreciating goods, though doubtless a similar argument can be made regarding income. The upsides are firstly that it implies a stronger moral obligation to avoid causing harm rather than Singer's obligation to render help - albeit with a more indirect or abstract harm than the help of famine relief or other charity - and, perhaps more importantly and practically, that it provides relatively clear and attainable benchmarks, avoiding the fuzziness or extremism which result from Singer's argument.
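(A quick consistency check on those benchmarks: the 200% figure implies a global average adult wealth of roughly US$85,000 - a number inferred from the post's own figures, not independently sourced - since

\[
2 \times \$85{,}000 = \$170{,}000, \qquad 0.6 \times \$85{,}000 = \$51{,}000, \qquad 0.9 \times \$85{,}000 = \$76{,}500,
\]

which rounds to the quoted ~$50,000-$80,000 band.)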

Not a lot to add to this right now, but agreed, those are definitely real issues! A third point (kind of a counterpoint to everything I wrote above) is that telling people they are doing wrong runs a good chance of getting their hackles up and making them defensive... so perhaps encouraging them to do good is marginally, marginally more effective. Abstract principles and facts are usually the most compelling thing for me, but I'm certainly in the minority there.
 

The premise obviously assumes some brand of Utilitarianism. And it runs afoul of the fundamental problem with all Utilitarianism: Utilitarianism among the non-omniscient must have limited scope. The trolley problem is straightforward for the Utilitarian who assumes uniform utility among all humans, but if the trolley is going to run over two people who are on their way to kill a dozen students in a school shooting, and you can switch tracks to run over one surgeon on his way to perform several life-saving operations, that changes the calculations a bit. But what if one of the students would have been the next Hitler? What if one of the patients would have been the next Hitler?

Utilitarianism always has to be scoped to the predictability of an outcome. If all air travel stopped, it seems unlikely that the ensuing collapse of the world's economy would actually be a net benefit (though you never know) so it's hard to say that every person on every flight is contributing to net harm rather than to net benefit. Calculating whether an individual person on an individual flight is contributing or detracting from overall utility seems like it would be a bit complicated to calculate, and not nearly predictable enough to make any kind of reliable utilitarian judgement on.
 
I'd be surprised if Peter Singer lived quite that minimalistically
My understanding is that he did.

Utilitarianism always has to be scoped to the predictability of an outcome.
I think what you're talking about can better be referred to as Consequentialism, which is what "outcomes" are based on. Utility is more about pragmatic intent. And "scoped to predictability of an outcome" could be better defined as knowledge.

Utilitarianism in its simplest form could be defined as: if we do "X" and the sum of its effects will be positive, then it is worth doing. I don't think that Utility necessarily needs to be limited this way, just that in its simplest form this is what it looks like.

What you're saying is that the action "X" can only be understood in the light of predictability, i.e. the consequences, but I don't think there's any consequential requirement to utility, nor do I think that omniscience is necessary to make moral decisions. The practical usefulness of utility could be measured that way, but then, so can every other moral theory, which is of course what Consequentialism does; but not everyone, myself included, subscribes to the idea that an action in the moment is good only if the outcome it produces is good.

However, if that's the case, then are actions that are bad, but that result in good, themselves good?

Or have I misunderstood?


If all air travel stopped, it seems unlikely that the ensuing collapse of the world's economy would actually be a net benefit (though you never know)
Exactly, which is why no one would ever consider such an act in those terms: air travel, like many other human endeavors, is a mix of good results and bad results, the complexity of which is impossible to calculate. But we can take a high-level view. If air travel is perceived to create more positives than negatives (to the extent that question can be known, i.e. knowledge), then its sum is considered positive.

To know = knowledge. At the turn of the 19th century it was common for factories to pump toxic pollution into the air that people breathed, making them sick or killing them, but it was a lack of knowledge that led to the perception that it was 'good' to burn coal (or whatever) close to where people might breathe it, because the outcome was understood mostly in the context of its benefits, not its drawbacks.

From an omniscient point of view, that kind of pollution was always bad, but from a human-centric point of view it wasn't, because there was a time when people didn't realize the harm and could only see the benefit; they lacked knowledge.

That said, I admit that there is an intangible mixed in which is, moral judgements are always relative to axiomatic assumptions. Meaning that if our assumptions about what constitutes a good or bad outcome differ, then our perceptions about good and bad, right and wrong might be different.
 
What you're saying is that the action "X" can only be understood in the light of predictability, i.e. the consequences, but I don't think there's any consequential requirement to utility,

As you just said:


Utilitarianism in its simplest form could be defined as: if we do "X" and the sum of its effects will be positive, then it is worth doing.

Which is to say that the worthiness of doing X should be evaluated according to the net consequences of X. Evaluating the utility of X is then entirely dependent on knowing the consequences of X.


The practical usefulness of utility could be measured that way, but then, so can every other moral theory,

How can deontology be measured that way?


but not everyone, myself included, subscribes to the idea that an action in the moment is good only if the outcome it produces is good.

That is, however, the assumed premise of the post:


That is pretty explicitly an assumption of utilitarian consequentialism.


Exactly. So for humans who were aware that they did not know the net consequences, how could they evaluate whether it was good or bad based on the net consequences? They couldn't. Utilitarian morality can never be solved for because the net consequences can never be solved for.
 
Thank you for the reminder; I'd written a response to this at the time and then it got eaten, so I ragequit. Pretty sure I read somewhere that this same argument was made in defence of slavery: if all the slaves were instantly freed, the economic chaos and retributive violence would surely result in far more deaths than there are slaves, so the least harmful option must be to keep upholding slavery. It was just as silly back then, too. Even in political terms it would at most be an argument in favour of this novel thing called having a plan, a transition path between where you're at and where you ought to be. More to the point, I'm talking about individual morality; it'd certainly be nice if public policy shared similar objectives, but the main question in my OP is based on the fact that with the recent fascist victory and likely end of American democracy, that's obviously not going to be the case. There is literally zero risk that all or half or even a tenth of all air travel will disappear overnight out of a sudden wave of moral enthusiasm. We could similarly panic over what might happen if everyone gained the ability to kill someone just by wishing it; but back in the real world, the only harm that's in view is from climate breakdown, not from economic collapse caused by a sudden drop in air travel.

Yes, we make our decisions with the information available to us. Surely you're not trying to suggest that there is any kind of perfection or omniscience or divine revelation behind some other approach to morality? I've never seen a deontic moral principle for which it's not necessary to carve out some major exceptions, and the reason it's necessary to do so is because of the consequences if we don't. "Don't do to others what they don't want done to them" - unless they're engaged in hurting someone, or they're a child, or they're a libertarian whinging about taxation. "Everyone has the right to life, liberty and security of person" - unless they're engaged in hurting someone, or they're a child (at least for liberty), or a ground invasion of the main islands would cost millions of Japanese and American lives. Duty-based and virtue-based approaches to morality certainly have their place as simplifiers, gap-fillers and habit-formers, but it's difficult to even imagine a non-religious argument that the end in view for moral thinking and activity isn't ultimately based on the outcomes. I guess maybe the argument that it just feels good to think of ourselves as good people... but even that is really a kind of consequentialism, as are many religious views oriented towards heaven or anatta or nirvana if it comes to it.
 

Exactly. That is a utilitarian argument, and if you accept utilitarianism as a worthwhile ethical philosophy, you can't just say "slavery is wrong, period" like a deontologist can. You have to consider how many people would be affected negatively by the economic impact of ending slavery. Is it more people than are negatively affected by the continuation of slavery? These are questions for the utilitarian to grapple with.


Even in political terms it would at most be an argument in favour of this novel thing called having a plan, having a transition path between where you're at and where you ought to be.

I'm not convinced that a plan to eliminate air travel altogether is a good thing to make.



I am talking about individual morality as well. I don't think there is any actual danger of all air travel disappearing overnight either. My point is that supposing the outcome where there are over 8 billion people on a planet with no air travel to facilitate meeting all their needs is actually a negative consequence A, then if you contribute a fraction B toward that negative outcome, your actions are immoral on a scale of A × B according to utilitarian math.


Yes, we make our decisions with the information available to us. Surely you're not trying to suggest that there is any kind of perfection or omniscience or divine revelation behind some other approach to morality?

Of course not. But we cannot evaluate the net utility of an action with our limited information.

I've never seen a deontic moral principle for which it's not necessary to carve out some major exceptions, and the reason it's necessary to do so is because of the consequences if we don't.

It isn't because of the ultimate consequences, the net consequences, the long-term consequences, or the aggregate consequences though. It is because of the immediate narrative consequences.

To take the classic argument against deontology, suppose Nazis come to the door and ask a deontologist, Debbie, whether she is hiding any Jews in her basement. If she has a deontological principle that telling the truth is good, then she has to tell the truth. But the immediate narrative consequence is that those innocent Jews in her basement will be killed horribly. Fundamentally, we don't admire Debbie, because telling the Nazis about the Jews in the basement is not very protagonisty of her.

But suppose we had come from the future in our DeLorean and knew that this was the point in time where everything had gone horribly wrong. The Jews in the basement will build a Skynet to defeat the Nazis, but then the Skynet turns evil and wipes out most of humanity and subjugates the rest.

It still wouldn't make telling the Nazis about the Jews in the basement a compelling example of nobility, even though it involves both deontologically noble truth-telling, and utilitarian nobility in terms of minimizing aggregate harm.
 
Which is to say that the worthiness of doing X should be evaluated according to the net consequences of X. Evaluating the utility of X is then entirely dependent on knowing the consequences of X.
I apologize because I think I misunderstood you. Totally my fault. I try to be objective without preconceptions, but I realize I came into this with some, and that led me to pursue a response that likely felt like I was ignoring you.

So if you will humor me, I'll share with you my thoughts and allow you the opportunity to evaluate my ideas as they relate to our discussion.

I see good and bad things about both utility and deontology. You're right that a pure deontologist wouldn't judge a single act by its outcome. But we can absolutely judge the rule by its consequences by asking: “What would be the overall outcome for society if everyone followed this specific rule?” The moment we ask that, we are applying a consequentialist lens to a deontological framework.

So what is the solution? A synthesis of both: Universal Utilitarianism.

My goal is to build a moral theory that strives to be:
  • Absolute: Its core principle admits no exceptions.
  • Universal: It applies to all people at all times.
  • Objective: Its decisions are not based on subjective whims.
The system begins with a single value judgment, or axiom, because facts alone can't provide one: People ought to be happy.

However, while facts can't prove that axiom, they serve as the indispensable foundation for all moral reasoning that follows. This leads to the core method of my system, which is a moral equivalent of the scientific method.

It has long been known that no catalog of facts about the world, no matter how complete, can ever by itself furnish us with a moral system. There must also be some decision made of what to value which can never be derived from mere knowledge of those facts. But while it is true that moral directives cannot be derived from the bare facts of the external world, they are still based on those facts, and therein lies the key. In other words, descriptive statements cannot confirm prescriptive statements, but they can disprove them. Any ethical directive based on a false premise is wrong. Ethical directives based on claims of fact that are not known to be false, but that lack sufficient evidentiary support, should be held only for consideration until that claim is either decisively confirmed or decisively refuted (falsification).

Once a moral directive has passed this factual test, we then apply the core utilitarian principle: Always attempt to minimize actual and potential harm, pain, and suffering; always attempt to maximize actual and potential happiness, health, and well-being.

This solves the problem of omniscience by accepting that we must judge actions not on their ultimate, unknowable outcomes, but on their foreseeable outcomes based on reasonable knowledge. I'm reminded of the Oracle in The Matrix: "we cannot see past the choices we do not understand." It also addresses many of the challenges of situational ethics that so many moral systems get tripped up on.
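Sketching those two stages as a toy procedure (purely illustrative, not the poster's actual system; the names and the miniature knowledge base are hypothetical, and filling in the premise-checking predicates would be the real philosophical work):

# Toy sketch of the two-stage test described above: a factual screen
# (falsification) followed by the core principle applied to foreseeable,
# rather than ultimate, outcomes. All names here are hypothetical.

KNOWN_FALSE = {"puppy-kicking angers Thor"}      # decisively refuted claims
WELL_SUPPORTED = {"kicking causes suffering"}    # decisively confirmed claims

def evaluate(premises, foreseeable_harm, foreseeable_wellbeing):
    # Stage 1: factual test. A false premise makes the argument unsound;
    # an unverified premise leaves the directive held for consideration.
    for p in premises:
        if p in KNOWN_FALSE:
            return "unsound: rests on a false premise"
        if p not in WELL_SUPPORTED:
            return "held for consideration pending evidence"
    # Stage 2: judge by foreseeable outcomes, not unknowable ultimate ones.
    if foreseeable_wellbeing > foreseeable_harm:
        return "endorsed: foreseeably maximizes well-being"
    return "rejected: foreseeably increases net suffering"

# The Thor directive discussed below fails at stage 1:
print(evaluate(["puppy-kicking angers Thor"], 0, 5))   # unsound: false premise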

The "rightness" of an action is grounded in whether it was taken with the reasonable expectation of a positive consequence. This shifts the focus from an impossible demand for correct results to a practical demand for sound intent and reasoning. It makes knowledge-seeking not just an intellectual pursuit, but a core moral duty.

Lastly, accepting that people ought to be happy means accepting that the path there is never a straight line. Like science, morality becomes a process of evaluating our choices and applying lessons learned, while accepting that perfection isn't possible but improvement is expected.
 
Yes.

Problem solved.


Unless you're flying private twice a week, or being a total asshole and deliberately burning a bunch of coal because you think that's gonna pwn teh libz, there is a limit to an individual's personal effect on the environment. Even if every single person in the US chose to take mass transit whenever possible, turn off lights not in use, weather-proof their homes etc., the reality is that most of the work must be done on a societal level.

Individuals can't build their own power plants, or grow all their own food, or build highly efficient fridges, and so on.

So yes, individuals should do their part, but most of the heavy lifting needs to be done by government and institutions. They're the ones who need to build sustainable energy generation, fix the grid, improve public transportation, develop cleaner air travel, tame fossil fuel companies, and so on.


Another aspect is promotion of values vs adherence to values. In theory, you could make an argument that it's more important to convince others to be environmentalists and support green policies than to live a green life. Again, that's because the damage one person can do, and the ability of others to inspect their lives, is usually limited.

I see little reason to pay much attention to allegations of hypocrisy, as they are almost always made in bad faith. E.g. Greta Thunberg takes extreme steps to reduce her individual carbon footprint, and the deniers launch all sorts of ugly ad hominem attacks on her anyway.

That said, I do think that if you're promoting environmentalism, you ought to make a good faith effort to reduce your carbon footprint, because it's the right thing to do. I don't think that proclaiming "I'm green" gives you a hall pass to muck up the environment at will.


In contrast, something like "consumption of meat" is far more reliant on individuals. Marketing notwithstanding, consumer choices drive the market, and have a lot more influence on the whole. If I want to get from Point A to Point B, my choices are going to be constrained by societal options. With food, for most (but not all) Americans, you should already have sufficient options to dramatically reduce or completely avoid eating meat.


For vaccination... Ugh. Yes, everyone has a moral obligation to get vaccinated, as long as you aren't allergic or the vaccine doesn't cause some other major personal issue. This is because...

- Diseases don't give a shit about your politics or conspiracy theories. If you're not vaccinated, even if you personally don't get ill, you can easily spread it to vulnerable people.
- Getting vaccinated not only makes you safer, it also reduces the cost of medical care you need if you get sick. That also reduces strain on medical resources, especially with serious illnesses and/or pandemic type events.
- Not everyone is able to get vaccinated, and many diseases can be thwarted via herd immunity.

IMO the arguments against vaccination are not based in either science or ethics. It's American Hubris, and selective emphasis of individual over society, gone mad.


As should be clear by now, there is no one-size-fits-all answer. You need to derive general principles, and figure out how to apply them to varying situations.


An extreme example would be discovering that an asteroid is going to obliterate human civilization in a few months; would it still be immoral to fly a private jet or own a superyacht after that discovery?
Yeah, I don't really see this as a useful hypothetical situation. If an asteroid is going to obliterate the planet, I have a feeling that society will break down so fast that no one will be able to fly anyway.


Is there any point at which it becomes morally legitimate to decide that it's all screwed up anyway so there's no point in worrying about negligible 'contributions' to the inevitable?
Not really, certainly not in any conditions we'd normally experience.
 
More to the point, I'm talking about individual morality
I'd like to add a thought about this idea and hope that it makes sense.

Right now I think we lack the proper words to define what I'm about to say, so if you think I'm misusing the definitions, it's on purpose.

That said, I don't think the term "individual morality" is very useful; frankly it's more of a contradiction, and I think it leads to more confusion than clarity, so clarifying the meaning of the words should help people better understand. There is no such thing as (or shouldn't be) "individual morality"; rather, individuals consider their personal conception of morals and moral systems (as I've done above). Morality is to morals as a team is to a teammate. In other words, morality should be considered a social practice. When two or more people agree on a moral premise, they are agreeing on their individual conceptions of what is or is not moral, and form an agreement we call morality.

The reason this is important is that it counters the idea that an objective moral system must be external to human conceptions of it: the idea that if each person can create their own morality, it must inexorably result in every person being free to act on what is in their personal best interest, unless our system of morals and morality is external to us (god or some such).

But this idea that people are incapable of forming solid social systems is easily proven false.

Think of any system of rules. Take the rules of professional sports, where the incentives to cheat (millions of dollars in contracts and endorsements, not to mention the accolades that accompany winning) are always on the line. Every single player can have their own ideas about what the rules should be, contrary to the rules as they are presented. The individual's conception of what the rules should be is personal, but the rules as practiced are social. Every single person in every professional sport would be better off if they only needed to follow their own conception of the rules, yet the vast majority of players will agree to and abide by the system that is socialized among them, despite the fact that they would be individually better off ignoring it and doing what they wanted.

Which is to say that humans are fully capable of generating an objective system from a collection of subjective inputs through agreement and socialization.
 
Any ethical directive based on a false premise is wrong.

I disagree. For example, if you have an ethical directive not to kick puppies because it will anger Thor, the directive not to kick puppies isn't necessarily wrong, just because the premise that it will anger Thor is false.


Those look like six different principles to me, and some of them are mutually exclusive.

If you ever love someone, then one of you will one day die, and the other will experience pain and suffering. Does that mean that it is immoral to love because it will lead to pain and suffering?

I suppose you could optimize for best value in terms of trying to get as many units of happiness as you can per unit of pain. Or you could just optimize for minimal pain, and eschew any amount of happiness toward that goal. Or you could optimize for maximal happiness, whatever the cost in terms of pain. I don't see that you have really addressed why one should be preferred over another.

More to the point though, if you could increase the happiness of ten million white people by 350 happiness units apiece by enslaving 7 black people at a cost of 4,575 units of suffering apiece, is it moral to enslave those 7 black people? Don't forget to show your work.
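(Showing the work on a naive unit-summing reading of the hypothetical's own numbers:

\[
10{,}000{,}000 \times 350 = 3.5 \times 10^{9}\ \text{happiness units} \qquad\text{vs.}\qquad 7 \times 4{,}575 = 32{,}025\ \text{suffering units,}
\]

so a raw sum comes out overwhelmingly 'positive' - which is precisely the bullet a simple unit-counting utilitarian would have to bite, and what the rest of the exchange disputes.)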
 
Well said.
 
I disagree. For example, if you have an ethical directive not to kick puppies because it will anger Thor, the directive not to kick puppies isn't necessarily wrong, just because the premise that it will anger Thor is false.
You're right that the action of not kicking puppies is morally correct. My principle is that the justification for that action is wrong. The full directive 'Don't kick puppies because of Thor' is a bad piece of moral reasoning. The directive is right, but not because the premise is true.

Let's show the absurdity of your argument.
A moral directive based on a false premise is structurally unsound and therefore wrong, even if it occasionally points to the right action. The problem with the 'because of Thor' premise is its unreliability.
What if a new prophet comes along tomorrow and says, 'Good news! I've just received word that Thor is no longer angered by puppy-kicking; in fact, he finds it hilarious!'?

According to the logic of the original directive as you've (mis)interpreted it, it would now be morally permissible, or even encouraged, to kick puppies. The directive flips because the false premise it rests on is arbitrary.

A sound directive, like 'Don't kick puppies because it causes unnecessary suffering to a sentient creature,' is based on a true, verifiable premise. It is stable and reliable. My principle aims to build a moral system on that kind of solid ground, not on the shifting sands of false beliefs or, worse, bad-faith reasoning meant to justify immoral actions while hiding behind rhetorically confusing examples (for those who do not regularly ponder moral philosophy).
Those look like six different principles to me, and some of them are mutually exclusive......
The principle is not a menu where one must choose between four competing options. The four clauses are intertwined aspects of a single goal: creating the best possible state of affairs for everyone to the extent that it's reasonably possible.

The question of love is not about avoiding pain at all costs. It's about evaluating the total impact of an experience on a life. A life without love, chosen to avoid the eventual pain of grief, is a life stripped of immense potential for happiness, connection, health, and well-being. The suffering of loss is profound, but it does not erase or negate the decades of joy that preceded it. I've lost both of my parents, but I would never, if it were possible, erase the pain of their loss if I were also forced to lose all of the good memories. In fact, in love as in life, the inevitable pain gives necessary context and value to happiness, far beyond any simple quantification of love or loss.

That said I think your counter argument can easily be dismissed based on a well known quote that I think most people would agree with.

"It is better to have loved and lost, then to have never loved at all". - Alfred, Lord Tennyson

On your slavery example, the answer is brutally simple.

No one would want to live in a society where they could belong to a category of people that would be sacrificed for the happiness of others on the sole justification that more people would be happy than unhappy as a result. The collective suffering from the anxiety of such a place would far outweigh any positives. This system of sacrifices would create fear and social mistrust and result in the moral corruption of society. That's not minimizing harm; it's exactly the opposite.

Now you might counter that a person who believes they will never be the one sacrificed could live in bliss, never worrying that they might be, and you're right, someone could feel that way; but my moral system doesn't rely on any person's privileged subjective perceptions.

I would counter by asking that if we created a veil of ignorance and asked people how they would feel about enslaving a few people so a lot of people could be happy, not knowing whether they'd be the slave or the master, then given the psychological fact of loss aversion I just spoke of, I think it is axiomatically true that people would recognize the problem and realize that the overwhelming chance of being happier would not outweigh the chance of that kind of misery and suffering.

Lastly, everything I've outlined is in stark contrast to these boilerplate situational ethics scenarios. There's nothing in what I've said that would be consistent with these ideas, though I will freely grant that the word "potential" in the statement ...actual or potential suffering... is doing a lot of heavy lifting. But all moral theories have the same problem; they all suffer from some flaws and weaknesses, and the idea is to minimize them to the extent that it is possible.
 

That wasn't the logic you presented though. What you presented was that we can evaluate directive A as being a wrong directive if it is justified by untrue premise B.

If B is a true premise then A is a good directive. Premise B is untrue, therefore directive A is not a good directive. That is the logical fallacy of denying the antecedent. A prescriptive statement cannot be disproven by disproving any particular justification for it, which is exactly what you seemed to be claiming when you said:


In other words, descriptive statements cannot confirm prescriptive statements, but they can disprove them. Any ethical directive based on a false premise is wrong.



The principle is not a menu where one must choose between four competing options. The four clauses are intertwined aspects of a single goal: creating the best possible state of affairs for everyone to the extent that it's reasonably possible.

You do though. Avoiding pain is often at odds with maximizing happiness. Sometimes you have to accept pain as the cost of happiness, self-actualization or other forms of fulfillment. If you want an Olympic gold medal, avoiding pain isn't really for you. You have to choose to accept pain as the cost of achieving your ambition. Which isn't necessarily an immoral choice.



That said I think your counter argument can easily be dismissed based on a well known quote that I think most people would agree with.

"It is better to have loved and lost, then to have never loved at all". - Alfred, Lord Tennyson


That doesn't dismiss my counter argument, that dismisses your argument for minimizing pain and suffering. Lord Tennyson is clearly saying that optimizing for minimal pain and suffering is not the best policy. There are better things to optimize for. And as you astutely recognize, most folks tend to agree.



That's a nice sentiment, but let's just suppose, hypothetically, that folks aren't as enlightened and fairness-minded as you like to think they are.

Let's suppose for a moment that there are 10 million white people who genuinely have no anxiety or fear that they could ever belong to the category of people that would be sacrificed for the happiness of others. They wake up each morning with genuine excitement, counting down the days till the gladiatorial deathmatch where they get to watch the slaves fight to the death on pay-per-view.

Hypothetically, if enslaving 7 people and forcing them to fight to the death actually brought a lot of happiness to 10 million people, or 100 million, or a billion, would it be morally right to enslave them and force them to fight to the death on pay-per-view?
 
You are correct that Denying the Antecedent is a formal fallacy. However, that fallacy would only apply if I were making the argument you've laid out, which I am not, though I do appreciate you pointing out the potential ambiguity of the word 'wrong'. I will restate it to avoid any misunderstanding.

So let me clarify.

The misunderstanding hinges on the word 'wrong.' When I say that a 'directive based on a false premise is wrong,' I am not making a claim about the moral status of the prescribed action. I am making a claim about the logical soundness and reliability of the entire moral argument (Premise + Directive). I have already agreed that the action of not kicking puppies is correct.

And now the clarification....

To be more precise, my principle is: Any moral argument is unsound if its justification is based on a false premise. The argument 'Don't kick puppies because it will anger Thor' is an unsound argument. The conclusion happens to be good by coincidence, but the reasoning is invalid.

Let's try this.....

Consider the mathematical statement: '2 + 2 = 4 because the sky is blue.' Is the conclusion ('4') wrong? No. But is the entire statement, as a piece of mathematical reasoning, wrong and unsound? Absolutely. My principle treats moral reasoning the same way and your Thor example is the blue sky in my response.

My focus is on establishing a reliable and structurally sound ethical framework. Arguments from false premises are, by definition, unreliable. Even if they occasionally stumble upon a correct outcome, their foundation is arbitrary and dangerous, as my 'new prophet' example showed. A sound ethics cannot be built on a foundation of lucky guesses.
You are absolutely right that many of life's greatest achievements require us to consciously accept and endure pain. A principle that couldn't account for the Olympic athlete would be useless. Fortunately, mine does.

My principle directs us to minimize both actual and unwanted harm and suffering, but it doesn't imply that all pain is the same (I simply didn't have room to point this out). We need to distinguish between pointless suffering and productive suffering, or what we might call an 'investment in well-being.' Hopefully you can understand the distinction; let me know if you need me to define these terms.
The Olympic athlete is the ultimate example of someone managing all four clauses of the principle at once. They are not simply choosing pain over happiness. They are making a strategic trade-off:
  • They accept a measured amount of actual, temporary suffering (exhaustion, sore muscles, strict diets).
  • In order to maximize their potential and actual happiness and well-being (the fulfillment of victory, peak physical health, the discipline and self-knowledge gained).
Viewed holistically, the choice is simply a desire to create the 'best possible state of affairs' for themselves. They are not violating the directive to minimize suffering; they are wisely investing a smaller, controlled amount of suffering for an overwhelmingly positive return. The alternative, avoiding the temporary pain of training, would result in a massive net loss of potential happiness and fulfillment.

Having a wisdom tooth pulled is a more visceral example. Obviously, there is pain involved, but that pain is welcomed because it is an investment in health and long term well-being.
Hopefully my corrections above answer this.
 
The hypothetical insists that the 10 million people feel "no anxiety or fear," but this certainty is a fragile illusion. The moment a society creates a mechanism to strip a person of their humanity and sacrifice them for the collective, that mechanism becomes a threat to everyone. The precedent is set.

The action is morally wrong not because of a simple, immediate calculation of fear. It is wrong because:
  • The "happiness" it generates is a corrupting force that harms the moral character of the entire population.
  • The suffering it inflicts is of an absolute, bottomless quality that outweighs any quantity of shallow pleasure.
  • The system itself is inherently unstable and guarantees a future filled with even greater atrocities.

Now if you can create implausible hypotheticals, I can add plausible repercussions...
What happens when the public's excitement wanes and they demand more spectacular shows? The number of victims will need to increase. What happens when a member of the majority commits a crime or falls out of favor? The easiest solution will be to re-categorize them into the slave class. The very existence of such a system introduces a poison into the society's foundation. It creates a moral logic of disposability that is inherently unstable and will inevitably lead to more and greater suffering. This massive potential harm cannot be ignored.
A complete analysis of the outcomes reveals that this act does not create a better world. It creates a monstrous one, poisoned by cruelty and destined to consume itself. It is a catastrophic failure by any measure of total well-being.
 

One person's win at the Olympics is another person's loss. How much suffering do how many thousands of people endure for the benefit of one person to get that positive return of accomplishment? It seems like it might be a fairly decent tradeoff for an individual Nash equilibrium, but if your system evaluates morality based on optimizing net aggregate pain, and net aggregate happiness and fulfillment, we have to evaluate the total pain endured in the pursuit of the Olympics against the total units of happiness in aggregate to determine whether participating in the Olympics is immoral or not.


So? If people don't have an awareness of the threat, it won't cause them suffering, and if watching slaves forced to fight to the death brings them joy, then slave fights are morally right according to a philosophy of optimizing for the most units of happiness per unit of suffering.

The action is morally wrong not because of a simple, immediate calculation of fear. It is wrong because:
  • The "happiness" it generates is a corrupting force that harms the moral character of the entire population.

How are you evaluating whether it is a "corrupting force"? How are you evaluating the "moral character" of the entire population? If forcing a few slaves to fight to the death brings joy to millions at the expense of only a handful of people, then it doesn't harm the moral character of the population according to the moral standard of optimizing for the most units of happiness per unit of suffering.

The suffering it inflicts is of an absolute, bottomless quality that outweighs any quantity of shallow pleasure.

How is that being evaluated? It sounds downright deontological to me.

  • The system itself is inherently unstable and guarantees a future filled with even greater atrocities

It does not guarantee any future at all. The earth could get hit by a giant meteor that kills everyone before they can even collect their bets on the match. The system is designed to optimize per capita happiness-per-suffering for that population. Are you only thinking there is anything wrong with it because you are convinced that it isn't a good way to optimize per capita happiness-per-suffering for that population?

Suppose, hypothetically, that you were presented with overwhelming empirical evidence that the slave deathmatches were resulting in increased happiness, would you become convinced that they were morally right in that case?
 
One person's win at the Olympics is another person's loss.....
This interaction has been interesting, but this response is hard to take seriously, and unless this conversation takes a turn for the interesting this might be my last reply. You're just baiting me by relentlessly trying to force my attempt at a nuanced, holistic system into the box of a crude, simplistic "pleasure calculator." You've ignored my broader points about potential harm, moral character, and stability, and repeatedly demanded that I use your flawed, simplistic metric, and I'm not falling for it.

So here is how I'll respond.

It seems all of your objections stem from a single, fundamental misinterpretation of my principle. You are consistently analyzing it as if it were a crude, moment-to-moment calculator for adding up isolated units of "joy" and subtracting "pain."

It is not.

My principle is a strategic guide for designing the best possible and most stable state of affairs for everyone—a flourishing society. A flourishing society is, by definition, one that is not only happy, but also just, empathetic, and resilient. This holistic view of the consequences is key.

When viewed through this proper lens, your objections resolve themselves:
  • On the Olympics: A flourishing society values the pursuit of excellence and the joy of competition. The productive suffering of the athletes and the shared joy of the spectators creates a massive net gain in well-being and inspiration for the whole society. It is not a zero-sum game.
  • On the "Corrupting Force": A society of cruel, empathy-deadened individuals is, by definition, not a flourishing society, regardless of the shallow "joy" they report from watching slaves get slaughtered. Describing this degradation of moral character isn't an external rule, it is a core part of the consequence analysis.
  • On "Absolute Suffering": Recognizing that the absolute terror of a deathmatch slave has a disproportionate and shattering impact on the total well-being of the universe is not "deontological." It's an honest accounting of the consequences. Some outcomes are so catastrophically negative they can't be balanced by any amount of shallow pleasure.
  • On "Instability": A flourishing society is, by definition, stable. We make moral decisions based on foreseeable risk and potential outcomes, not by pretending cosmic accidents make all prediction meaningless.
This brings me to your final, and most important, question. You ask if I would accept slave deathmatches if "overwhelming empirical evidence" showed a net happiness gain.

The ultimate test for the justice of any social system is, as I said, The Veil of Ignorance. What society would a rational person agree to if they did not know their place in it?

No rational person would ever agree to a society that contained even a 0.0001% chance of being a deathmatch slave. The risk is absolute and unacceptable. Therefore, such a society is fundamentally unjust and, by definition, cannot be the "best possible state of affairs."

The evidence you're asking me to imagine is a logical contradiction. It's like asking, "What if you had empirical evidence of a square circle?" A society built on a foundation of ultimate terror and injustice is not a happy or flourishing one.

The Veil of Ignorance is the test. If you can explain why a rational person should agree to your deathmatch society from behind that veil, then you will have a case. Otherwise, all these hypotheticals are simply attempts to justify a fundamentally broken system.
 
It is the ethical framework you described.

My principle is a strategic guide for designing the best possible and most stable state of affairs for everyone—a flourishing society.

That is no doubt the ambition of your ethical framework, but that does not mean it is effective to that end.

On the "Corrupting Force": A society of cruel, empathy-deadened individuals is, by definition, not a flourishing society

According to what definition?

Using the framework of optimizing for maximum aggregate happiness with minimum aggregate suffering, a cruel, empathy-deadened society is flourishing relative to empathy-rich societies where everyone suffers all the time because of their empathic sympathetic reaction to seeing other people suffer.


You could do that with any deontological principle "Recognizing that the [varDeontologicalPrinciple] has a disproportionate and shattering impact on the total well-being of the universe is not "deontological." It's an honest accounting of the consequences. Some outcomes are so catastrophically negative they can't be balanced by any amount of [varUtilitarianBenefit]."

A deontologist does not believe that any consequence justifies abandoning one's principles. That's what deontology is. If you don't recognize the possibility of any utilitarian benefit that could justify deathmatch slaves, then you have a fundamentally deontological stance on deathmatch slavery.


No rational person would ever agree to a society that contained even a 0.0001% chance of being a deathmatch slave.

Not very familiar with history, are you? That's exactly the type of society that the Romans agreed to. Famous pioneers of rational thought agreed to just such a society.


The Veil of Ignorance is the test. If you can explain why a rational person should agree to your deathmatch society from behind that veil, then you will have a case.

If there were a doorway right next to me that had a 99.9999% chance of leading to billions of dollars worth of treasure, and a 0.0001% chance of leading to deathmatch slavery, I'd accept the risk.

I take a chance on dying horrifically in an auto accident every time I step into a motor vehicle. Folks take chances like that all the time, often for little potential reward.

I should think that any rational person seeking to maximize their happiness-per-suffering ratio would take the risk on Deathmatchslavistan so long as it offered the best median happiness-to-suffering ratio.
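(In expected-utility terms, taking the stated probabilities at face value and writing \(u_T\) and \(u_S\) for the hypothetical utilities of the treasure and of deathmatch slavery:

\[
E[u] = 0.999999\, u_T + 0.000001\, u_S,
\]

which stays positive unless the disutility of enslavement is more than about a million times the utility of the treasure - which is, of course, exactly what the 'bottomless suffering' claim above asserts.)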
 
It is the ethical framework you described.
Then you are misunderstanding. I'm sure others who read it will understand.
You've asked for my definition of a flourishing society. It is defined not by an arbitrary rule, but by a rigorous analysis of long-term, sustainable outcomes. A flourishing society is one that is stable, resilient, and capable of progress. Your proposed "cruel, empathy-deadened society" fails on all three counts.

Your caricature of an "empathy-rich society" as one where "everyone suffers all the time" is a straw man. You have mistaken empathy, the essential tool for social cooperation, for a crippling pathology. Real empathy is not about shared suffering; it's about the understanding that enables trust and mutual support.

Let's look at the consequences:

Stability & Resilience: Empathy is the social lubricant that makes trust possible. High-trust societies are more stable, have lower transaction costs, and are more resilient in the face of crises. A cruel, empathy-deadened society is, by definition, a low-trust society. It is brittle, paranoid, and prone to violent collapse. Its foundation is rotten.

Progress & Problem-Solving: Humanity's greatest challenges (pandemics, resource management, existential threats) are all collective action problems. Solving them requires immense levels of trust and cooperation. An empathetic society is capable of this. Your cruel society, composed of individuals who feel no stake in the well-being of others, would be utterly incapable of organizing to solve any complex threat. It is a society destined for failure.

The Quality of Well-being: My principle is not about maximizing units of any simple pleasure. It is about creating the best possible state of affairs. The "joy" of a sadist watching a deathmatch is a shallow, brittle, and cancerous pleasure. The well-being derived from deep relationships, community, and security (all of which are products of empathy) is profound, durable, and consistent with a fundamental understanding of human behavior. A consequentialist analysis must prefer deep, sustainable well-being over shallow, corrupting pleasure.

Wrapping up...

So, according to what definition is a cruel society not flourishing? According to a definition based on its actual, predictable consequences, grounded in real, tangible, testable observations of human behavior, past and projected forward.

Your cruel society is not "flourishing." It is a failed state with a terminal illness, whose members are distracting themselves with shallow pleasures as their social fabric disintegrates. Its high "happiness score" is a fever, not a sign of health. An empathy-rich society is stable, adaptive, and filled with citizens capable of the deep well-being that comes from trust and connection.

If you're tempted to ask me some version of "by whose definition is deathmatch slavery bad?", then, in Sam Harris's words: "We've now hit philosophical bedrock with the shovel of a stupid question."

In a framework that truly optimizes for the best long-term outcomes, empathy is an essential functional requirement, and cruelty is a guarantee of catastrophic failure.

That is what my framework proposes. Not simple equations, but a social analogue of hypothesis testing, and an understanding that moral knowledge is never complete and should be constantly challenged.
 
You could do that with any deontological principle...[truncated for space]

This is the most interesting point you've made, and it is fundamentally where we differ. You are right in your definitions: a deontologist adheres to principles regardless of consequences, and a consequentialist must derive their moral judgments from an analysis of those consequences.

Where we disagree is on what constitutes a full and "honest accounting of the consequences." Your argument assumes a simplistic model where "utilitarian benefit" is a single number that can be hypothetically inflated to justify anything. My framework is designed to be more realistic.

You ask if I recognize the possibility of a benefit that could justify deathmatch slaves. Within my consequentialist framework, that possibility is a logical impossibility. Here is why the consequences always result in a catastrophic net negative:

  1. The "Benefit" is a Consequentialist Poison: The "happiness" derived from sadism is a corrupting force by any metric and rational understanding of predictable human behavior at the macro scale. As a consequence, it degrades the moral and psychological health of the population, making them less capable of the trust, empathy, and cooperation necessary for a stable, flourishing society. A full accounting of consequences must include the quality of the society being created. This "benefit" is a cancer that guarantees future suffering.
  2. The "Cost" is Asymmetrical: As I've argued, my framework recognizes that profound suffering carries a vastly greater weight than shallow pleasure. The absolute, shattering, and total nature of the slaves' suffering is a consequentialist factor of such magnitude that it cannot be arithmetically balanced by any quantity of the corrupt "benefit" described above.
  3. The System is Inherently Unstable: The consequence of creating a mechanism for dehumanization is the guaranteed long-term collapse of social trust. This isn't a deontological rule; it's a sociological prediction based on observing how societies function.
So, when I say deathmatch slavery is always wrong, I am not stating an a priori deontological principle. I am stating an a posteriori consequentialist conclusion. I have analyzed the full spectrum of consequences (the psychological harm, the social instability, the corrupting nature of the pleasure, the disproportionate weight of the suffering), and the result is a catastrophic failure every single time, regardless of the numbers you invent.
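
To sketch what I mean by "cannot be arithmetically balanced" (a toy formalization of point 2 above, with every threshold and value invented purely for illustration):

# Treat catastrophic individual suffering lexicographically: a society with
# any catastrophic outcome ranks below any society without one, no matter
# how large its aggregate happiness. All numbers here are illustrative.
CATASTROPHIC = -1000  # per-person suffering beyond which no trade-off applies

def society_rank(happiness_total, worst_individual_outcome):
    # Sort key: societies free of catastrophe always outrank the rest;
    # aggregate happiness only breaks ties within each tier.
    no_catastrophe = worst_individual_outcome > CATASTROPHIC
    return (no_catastrophe, happiness_total)

deathmatch = society_rank(happiness_total=10**12, worst_individual_outcome=-5000)
modest     = society_rank(happiness_total=10**3,  worst_individual_outcome=-10)
print(modest > deathmatch)  # True: no happiness total rescues the deathmatch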

After observing that a specific action consistently and inevitably leads to a disastrous state of affairs, a rational consequentialist will treat that action as, for all practical purposes, forbidden. It becomes a reliable heuristic, or a "consequentialist rule."

Your template argument is clever, but it misses this crucial step. I don't just insert a principle and call its violation "shattering." I have provided a detailed, outcome-based analysis of why the consequences are shattering. A deontologist doesn't need that analysis; the rule is enough. For me, that multi-layered analysis of the consequences is the entire basis of the judgment.
 
The System is Inherently Unstable: The consequence of creating a mechanism for dehumanization is the guaranteed long-term collapse of social trust.

Compared to what? Rome, in one form or another, stood for some two thousand years.

But if you are saying that your aversion to deathmatch slavery is based on empirical comparison to the stability, resilience, and capability of progress of societies that have no mechanism for dehumanization, rather than on simply recognizing that forcing slaves to fight to the death is some villainous scumnuggetry, then by all means, show your work.

Which dehumanization-free societies did you use for your comparison, and how much more stable, resilient, and capable of progress were they than the Roman Empire? How much longer were they able to maintain social trust?

I have analyzed the full spectrum of consequences,

Unless you are omniscient, you haven't analyzed the full spectrum of consequences.
 