> There's a moral question I've been thinking about....

Your post reminds me of a paper written by Peter Singer (link to a YT video) that proposed that many common activities and expenditures, such as buying luxury goods or eating at expensive restaurants, are morally wrong if the money spent on them could have been used to save lives or alleviate extreme suffering. I think it could easily be applied to the idea you are proposing.
I think it's worth noting that I make a distinction between morals and morality. Morals are the rules that individuals believe they should follow, whereas morality is the system of morals that is commonly accepted and enforced. Think of it like the rules of football: each player might have ideas about what rules should be changed, added or deleted, but when they play, they all agree to play by the shared set of rules. Any individual player would benefit by ignoring the rules, but by and large most people agree to follow them anyway, because if everyone cheats, the structure of the entire system fails, which makes everyone worse off.

Now with that baseline, the question I've been mulling over is: how (if at all) does the moral status of contributions to an unsustainable aggregate harm change when that harm, or a similar one, becomes inevitable and avoidance efforts become pointless?
Think about how hard it is to create a set of rules that applies to a large group. If you have that group in a room, and everyone in the room sees everyone else's agreement to the rules, they are more likely to agree. But getting people to agree to rules across states, or even countries? That will take time. The Geneva Conventions are an example, and so is the Paris Climate Accord.

An extreme example would be discovering that an asteroid is going to obliterate human civilization in a few months; would it still be immoral to fly a private jet or own a superyacht after that discovery? Is there any point at which it becomes morally legitimate to decide that it's all screwed up anyway, so there's no point in worrying about negligible 'contributions' to the inevitable? And if so, how would we determine, in less extreme scenarios, where that point is?
> Your post reminds me of a paper written by Peter Singer (link to a YT video) that proposed that many common activities and expenditures, such as buying luxury goods or eating at expensive restaurants, are morally wrong if the money spent on them could have been used to save lives or alleviate extreme suffering. I think it could easily be applied to the idea you are proposing.

Yes, for sure. I think most people would agree that the moral obligation to refrain from harm or evil is weightier than the moral obligation to render help or actively do good. Whether I watch a trapped kid drowning in a pool or I myself actively hold him under the water 'til he dies, the practical outcome is the same, but I imagine most folks would agree that the active murder is at least a bit more reprehensible than the failure to render assistance.

Singer's angle of an obligation to render help to the poor in an abstract sense is essentially arguing an aspect of Jesus' morality, derived from his understanding of "love your neighbour as you love yourself." For example, as said by Luke's John the Baptist to the crowds who asked his guidance: "Anyone who has two shirts should share with the one who has none, and anyone who has food should do the same," which ultimately led to Jesus' followers going around with little more than the clothes on their backs or, in the watered-down version of Acts, communal modest living and sharing with all who had need.
> You may know that the reason commercials asking for your assistance helping starving people only show one or two people is that most people can comprehend that; show them 1000 starving people and the moral compunction to help doesn't go up, it goes down!

Not a lot to add to this right now, but agreed, those are definitely real issues! A third point (kind of a counterpoint to everything I wrote above) is that telling people they are doing wrong runs a good chance of getting their hackles up and making them defensive... so perhaps encouraging them to do good is marginally more effective. Abstract principles and facts are usually the most compelling thing for me, but I'm certainly in the minority there.
I think another factor is chronic change vs acute change.
It seems fairly straightforward that if a set of actions has a predictable morally-negative consequence A, and my actions contribute a fraction B towards that set of actions or consequences, then however tiny that fraction may be my actions are immoral on the scale of AxB.
> I'd be surprised if Peter Singer lived quite that minimalistically

My understanding is that he did.
> Utilitarianism always has to be scoped to the predictability of an outcome.

I think what you're talking about can better be referred to as consequentialism, which is what "outcomes" are based on. Utility is more about pragmatic intent. And "scoped to the predictability of an outcome" could be better defined as knowledge.
> If all air travel stopped, it seems unlikely that the ensuing collapse of the world's economy would actually be a net benefit (though you never know)

Exactly, which is why no one would ever consider such an act in those terms: air travel, like many other human endeavors, is a mix of good and bad results whose complexity is impossible to calculate. But we can take a high-level view. If air travel is perceived to create more positives than negatives (to the extent that question can be known, i.e. knowledge), then its sum is considered positive.
What you're saying is that the action "X" can only be understood in the light of predictability, i.e. the consequences, but I don't think there's any consequential requirement to utility.
Utilitarianism in its simplest form could be defined as: if doing "X" will produce some net positive effect, then it is worth doing.
The practical usefulness of utility could be measured that way, but then, so can that of every other moral theory. And not everyone, myself included, subscribes to the idea that an action in the moment is good only if the outcome it produces is good.
> It seems fairly straightforward that if a set of actions has a predictable morally-negative consequence A, and my actions contribute a fraction B towards that set of actions or consequences, then however tiny that fraction may be my actions are immoral on the scale of AxB.
To know = knowledge. At the turn of the 19th century it was common for factories to pump toxic pollution into the air that people breathed, making them sick or killing them; it was a lack of knowledge that led to the perception that it was 'good' to burn coal (or whatever) close to where people might breathe it, because the outcome was understood mostly in the context of its benefits, not its drawbacks.

From an omniscient point of view, that kind of pollution was always bad, but from a human-centric point of view it wasn't, because there was a time when people didn't realize the harm and could only see the benefit; they lacked the knowledge.
> Utilitarianism always has to be scoped to the predictability of an outcome. If all air travel stopped, it seems unlikely that the ensuing collapse of the world's economy would actually be a net benefit (though you never know), so it's hard to say that every person on every flight is contributing to net harm rather than to net benefit. Calculating whether an individual person on an individual flight is contributing to or detracting from overall utility seems like it would be a bit complicated, and not nearly predictable enough to make any kind of reliable utilitarian judgement on.

Thank you for the reminder; I'd written a response to this at the time, and then it got eaten, so I ragequit. Pretty sure I read somewhere that this same argument was made in defence of slavery: if all the slaves were instantly freed, the economic chaos and retributive violence would surely result in far more deaths than there are slaves, so the least harmful option must be to keep upholding slavery. It was just as silly back then, too. Even in political terms it would at most be an argument in favour of this novel thing called having a plan: a transition path between where you're at and where you ought to be.

More to the point, I'm talking about individual morality. It'd certainly be nice if public policy shared similar objectives, but the main question in my OP is based on the fact that, with the recent fascist victory and likely end of American democracy, that's obviously not going to be the case. There is literally zero risk that all, or half, or even a tenth of all air travel will disappear overnight out of a sudden wave of moral enthusiasm. We could similarly panic over what might happen if everyone gained the ability to kill someone just by wishing it; but back in the real world, the only harm that's in view is from climate breakdown, not from economic collapse caused by a sudden drop in air travel.
> The premise obviously assumes some brand of utilitarianism. And it runs afoul of the fundamental problem with all utilitarianism: utilitarianism among the non-omniscient must have limited scope. The trolley problem is straightforward for the utilitarian who assumes uniform utility among all humans, but if the trolley is going to run over two people who are on their way to kill a dozen students in a school shooting, and you can switch tracks to run over one surgeon on his way to perform several life-saving operations, that changes the calculations a bit. But what if one of the students would have been the next Hitler?

Yes, we make our decisions with the information available to us. Surely you're not trying to suggest that there is any kind of perfection or omniscience or divine revelation behind some other approach to morality?

I've never seen a deontic moral principle for which it's not necessary to carve out some major exceptions, and the reason it's necessary to do so is because of the consequences if we don't. "Don't do to others what they don't want done to them" - unless they're engaged in hurting someone, or they're a child, or they're a libertarian whinging about taxation. "Everyone has the right to life, liberty and security of person" - unless they're engaged in hurting someone, or they're a child (at least for liberty), or a ground invasion of the main islands would cost millions of Japanese and American lives.

Duty-based and virtue-based approaches to morality certainly have their place as simplifiers, gap-fillers and habit-formers, but it's difficult to even imagine a non-religious argument that the end in view for moral thinking and activity isn't ultimately based on outcomes. I guess maybe the argument that it just feels good to think of ourselves as good people... but even that is really a kind of consequentialism, as are many religious views oriented towards heaven or anatta or nirvana, if it comes to it.
What if one of the patients would have been the next Hitler?
> Which is to say that the worthiness of doing X should be evaluated according to the net consequences of X. Evaluating the utility of X is then entirely dependent on knowing the consequences of X.

I apologize, because I think I misunderstood you. Totally my fault. I try to be objective, without preconceptions, but I realize I came into this with some, and that led me to pursue a response that likely felt like I was ignoring you.
> Presumably we all agree that something needs to be done about climate change, for example - nothing controversial about scientific facts, right? - but is there a personal moral obligation to reduce or limit one's own contributions to that aggregate harm?

Yes.
> An extreme example would be discovering that an asteroid is going to obliterate human civilization in a few months; would it still be immoral to fly a private jet or own a superyacht after that discovery?

Yeah, I don't really see this as a useful hypothetical. If an asteroid is going to obliterate the planet, I have a feeling that society will break down so fast that no one will be able to fly anyway.
> Is there any point at which it becomes morally legitimate to decide that it's all screwed up anyway so there's no point in worrying about negligible 'contributions' to the inevitable?

Not really; certainly not in any conditions we'd normally experience.
> More to the point I'm talking about individual morality

I'd like to add a thought about this idea, and hope that it makes sense.
Any ethical directive based on a false premise is wrong.
Once a moral directive has passed this factual test, we then apply the core utilitarian principle: Always attempt to minimize actual and potential harm, pain, and suffering; always attempt to maximize actual and potential happiness, health, and well-being.
> Another aspect is promotion of values vs adherence to values. In theory, you could make an argument that it's more important to convince others to be environmentalists and support green policies than to live a green life. Again, that's because the damage one person can do, and the ability of others to inspect their lives, is usually limited.

Well said.
I see little reason to pay much attention to allegations of hypocrisy, as they are almost always made in bad faith. E.g. Greta Thunberg takes extreme steps to reduce her individual carbon footprint, and the deniers launch all sorts of ugly ad hominem attacks on her anyway.
That said, I do think that if you're promoting environmentalism, you ought to make a good faith effort to reduce your carbon footprint, because it's the right thing to do. I don't think that proclaiming "I'm green" gives you a hall pass to muck up the environment at will.
> I disagree. For example, if you have an ethical directive not to kick puppies because it will anger Thor, the directive not to kick puppies isn't necessarily wrong just because the premise that it will anger Thor is false.

You're right that the action of not kicking puppies is morally correct. My point is that the justification for that action is wrong. The full directive 'Don't kick puppies because of Thor' is a bad piece of moral reasoning. The directive is right, but not because the premise is true.
> Those look like six different principles to me, and some of them are mutually exclusive......

The principle is not a menu where one must choose between four competing options. The four clauses are intertwined aspects of a single goal: creating the best possible state of affairs for everyone, to the extent that it's reasonably possible.
Let's show the absurdity of your argument.
A moral directive based on a false premise is structurally unsound and therefore wrong, even if it occasionally points to the right action. The problem with the 'because of Thor' premise is its unreliability.
What if a new prophet comes along tomorrow and says, 'Good news! I've just received word that Thor is no longer angered by puppy-kicking; in fact, he finds it hilarious!'?
According to the logic of the original directive as you've (mis)interpreted it, it would now be morally permissible, or even encouraged, to kick puppies. The directive flips because the false premise it rests on is arbitrary.
In other words, descriptive statements cannot confirm prescriptive statements, but they can disprove them. Any ethical directive based on a false premise is wrong.
That said, I think your counterargument can easily be dismissed based on a well-known quote that I think most people would agree with.
"It is better to have loved and lost, than to have never loved at all." - Alfred, Lord Tennyson
No one would want to live in a society where they could belong to a category of people sacrificed for the happiness of others, on the sole justification that more people would be happy than unhappy as a result. The collective suffering from the anxiety of living in such a place would far outweigh any positives. This system of sacrifices would create fear and social mistrust, and result in the moral corruption of society.
> That wasn't the logic you presented, though. What you presented was that we can evaluate directive A as being a wrong directive if it is justified by untrue premise B: if B is a true premise then A is a good directive; premise B is untrue, therefore directive A is not a good directive. That is the logical fallacy of denying the antecedent. A prescriptive statement cannot be disproven by disproving any particular justification for it, which is exactly what you seemed to be claiming when you said:

You are correct that denying the antecedent is a formal fallacy. However, that fallacy would only apply if I were making the argument you've laid out, which I am not, though I do appreciate you pointing out the potential ambiguity of the word 'wrong'. I will restate it to avoid any misunderstanding.
> You do though. Avoiding pain is often at odds with maximizing happiness. Sometimes you have to accept pain as the cost of happiness, self-actualization or other forms of fulfillment. If you want an Olympic gold medal, avoiding pain isn't really for you. You have to choose to accept pain as the cost of achieving your ambition. Which isn't necessarily an immoral choice.

You are absolutely right that many of life's greatest achievements require us to consciously accept and endure pain. A principle that couldn't account for the Olympic athlete would be useless. Fortunately, mine does.
> That doesn't dismiss my counterargument; that dismisses your argument for minimizing pain and suffering. Lord Tennyson is clearly saying that optimizing for minimal pain and suffering is not the best policy. There are better things to optimize for. And as you astutely recognize, most folks tend to agree.

Hopefully my corrections above answer this.
> That's a nice sentiment, but let's just suppose, hypothetically, that folks aren't as enlightened and fairness-minded as you like to think they are.

The hypothetical insists that the 10 million people feel "no anxiety or fear," but this certainty is a fragile illusion. The moment a society creates a mechanism to strip a person of their humanity and sacrifice them for the collective, that mechanism becomes a threat to everyone. The precedent is set.
Let's suppose for a moment that there are 10 million white people who genuinely have no anxiety or fear that they could ever belong to the category of people that would be sacrificed for the happiness of others. They wake up each morning with genuine excitement, counting down the days till the gladiatorial deathmatch where they get to watch the slaves fight to the death on pay-per-view.
Hypothetically, if enslaving 7 people and forcing them to fight to the death actually brought a lot of happiness to 10 million people, or 100 million, or a billion, would it be morally right to enslave them and force them to fight to the death on pay-per-view?
My principle directs us to minimize both actual and potential harm and suffering, but it doesn't imply that all pain is the same (I simply didn't have room to point this out). We need to distinguish between pointless suffering and productive suffering, or what we might call an 'investment in well-being'. Hopefully you can see the distinction; let me know if you need me to define these terms.
The Olympic athlete is the ultimate example of someone managing all four clauses of the principle at once. They are not simply choosing pain over happiness. They are making a strategic trade-off:

- They accept a measured amount of actual, temporary suffering (exhaustion, sore muscles, strict diets).
- In order to maximize their potential and actual happiness and well-being (the fulfillment of victory, peak physical health, the discipline and self-knowledge gained).

Viewed holistically, the choice is simply a desire to create the 'best possible state of affairs' for themselves. They are not violating the directive to minimize suffering; they are wisely investing a smaller, controlled amount of suffering for an overwhelmingly positive return. The alternative (avoiding the temporary pain of training) would result in a massive net loss of potential happiness and fulfillment.
The hypothetical insists that the 10 million people feel "no anxiety or fear," but this certainty is a fragile illusion. The moment a society creates a mechanism to strip a person of their humanity and sacrifice them for the collective, that mechanism becomes a threat to everyone. The precedent is set.
The action is morally wrong not because of a simple, immediate calculation of fear. It is wrong because:
- The "happiness" it generates is a corrupting force that harms the moral character of the entire population.
- The suffering it inflicts is of an absolute, bottomless quality that outweighs any quantity of shallow pleasure.
- The system itself is inherently unstable and guarantees a future filled with even greater atrocities.
> One person's win at the Olympics is another person's loss.....

This interaction has been interesting, but this response is hard to take seriously, and unless the conversation takes a turn for the interesting, this might be my last reply. You're just baiting me into relentlessly trying to force my attempt at a nuanced, holistic system into the box of a crude, simplistic "pleasure calculator." You've ignored my broader points about potential harm, moral character, and stability, and you repeatedly demand I use your flawed, simplistic metric. I'm not falling for it.
> It is the ethical framework you described.

It seems all of your objections stem from a single, fundamental misinterpretation of my principle. You are consistently analyzing it as if it were a crude, moment-to-moment calculator for adding up isolated units of "joy" and subtracting "pain."
It is not.
My principle is a strategic guide for designing the best possible and most stable state of affairs for everyone—a flourishing society.
On the "Corrupting Force": A society of cruel, empathy-deadened individuals is, by definition, not a flourishing society.
On "Absolute Suffering": Recognizing that the absolute terror of a deathmatch slave has a disproportionate and shattering impact on the total well-being of the universe is not "deontological." It's an honest accounting of the consequences. Some outcomes are so catastrophically negative they can't be balanced by any amount of shallow pleasure.
No rational person would ever agree to a society that contained even a 0.0001% chance of being a deathmatch slave.
The Veil of Ignorance is the test. If you can explain why a rational person should agree to your deathmatch society from behind that veil, then you will have a case.
> It is the ethical framework you described.

Then you are misunderstanding. I'm sure there are others who read it that will understand.
> According to what definition?

You've asked for my definition of a flourishing society. It is defined not by an arbitrary rule but by a rigorous analysis of long-term, sustainable outcomes. A flourishing society is one that is stable, resilient, and capable of progress. Your proposed "cruel, empathy-deadened society" fails on all three counts.
Using the framework of optimizing for maximum aggregate happiness with minimum aggregate suffering, a cruel, empathy-deadened society is flourishing relative to empathy-rich societies where everyone suffers all the time because of their empathic sympathetic reaction to seeing other people suffer.
You could do that with any deontological principle...[truncated for space]
The System is Inherently Unstable: The consequence of creating a mechanism for dehumanization is the guaranteed long-term collapse of social trust.
I have analyzed the full spectrum of consequences.