
Morality

Caeli: Hi!
Elliot: Hi, Caeli.
Caeli: Will you tell me about morality?
Elliot: Morality is an area of knowledge. It includes theories about how to live well, and how to make good choices, and what's right and wrong and good and evil. You could also call morality the theory of decision making.
Caeli: How can we determine what is moral, or not?
Elliot: For a lot of questions, we don't have to figure it out. We already know. We know stealing is wrong, and murder is wrong, and being kind to our friends is right. The usual thing to do is to use the knowledge we already have. We don't have to justify it. All we have to do is be willing to improve on it if it seems to have problems or flaws. But as long as it seems to work, then philosophically there's nothing wrong with using it even if we can't prove it's right.
Caeli: Then how will our moral knowledge get better? Do we really have to wait for problems -- for things to go wrong -- before we can fix them?
Elliot: No. If you want to, you can think about morality. If you do thought experiments, and imagine situations, and what you'd do in them, you can find problems now. And if you're suspicious there might be a problem, or just curious, then go right ahead and look for improvements.
Caeli: What else will help?
Elliot: It is good to fit theories into more general frameworks, or alter them to be more universal, or connect them with other ideas we have. Doing this is interesting in its own right; it's part of how we learn. But it also helps us correct errors. When our ideas don't mesh together nicely, that is a sign they could be improved. And the more generally they apply, the more varied examples we can try them out on, which will help reveal hidden flaws.
Elliot: Note that all this is exactly the same way that we would approach any other topic.
Caeli: What if I do want to know justifications for morality? Why are things moral?
Elliot: Well that's tricky, and we don't know the whole answer. But to start with, morality is the right way to live. That means it works well, in terms of whatever criteria are important. We have a lot of ideas about what those criteria are, like happiness, wealth creation, freedom, scientific achievement, or creating a lasting and valuable political tradition such as the United States government. And there are smaller things we value, like helping a friend, or teaching our child something he wanted to know, or cooking a tasty meal, or eating a tasty meal. Some of the things I just said might be wrong. Maybe they aren't so good after all. One of the things morality is about is figuring out which are right.
Caeli: I'm getting a clearer picture, but what about the foundations? What do we justify moral ideas in terms of?
Elliot: Let me be very clear and say again that we do not have to justify ourselves. And the whole idea of foundations is confused: we never discover ultimate, final, true foundations upon which we can never improve. There are always more subtle problems that we can work on.
Elliot: And knowing the correct foundations, or the reasons for things, is often not very helpful. Just because you discover how your fear of spiders came into existence doesn't necessarily mean you're any closer to getting rid of the phobia.
Elliot: Note, again, that this is the same for other topics besides morality.
Elliot: But with that said, it is interesting to think about why things are right. We have a lot of answers to that, but they aren't very well connected, and we could do with some deeper truths. I do have thoughts about that. I think I can answer your question to your satisfaction.
Caeli: That sounds good. Go ahead.
Elliot: What does morality consist of? Well, it's not supernatural. And it's not from God. What's left? It must come from physics, logic, and epistemology.
Caeli: What's epistemology?
Elliot: It is knowledge about knowledge. It answers questions like how we learn. But it doesn't just apply to humans. Lots of things contain knowledge. Obvious examples are books and computers; those contain the knowledge we put into them. Animals and plants also contain knowledge. Or perhaps saying they express knowledge would be clearer. I don't mean they have a compartment inside in which the knowledge is stored, though they do have DNA. But consider a tree. It expresses knowledge about how to turn sunlight into energy. A wolf expresses knowledge about how to hunt prey. There are also less obvious human-made examples: a table embodies knowledge of how to keep items in locations that are convenient for humans.
Caeli: Alright, I get the idea. You mean knowledge very broadly.
Elliot: Yes.
Caeli: Isn't epistemology a type of logic?
Elliot: Yes, you can think of it that way. In that sense, math is logic as well. And how to argue is a matter of logic. And how to lie with statistics is a matter of logic. I consider epistemology important enough to mention by name.
Caeli: Is logic part of physics?
Elliot: I don't know. But I do know that brains are physical objects, so our knowledge of logic comes only through physical processes, which we know about through physics.
Caeli: Alright, so morality consists of physics, logic, and epistemology. Now what?
Elliot: This might appear completely useless. It's a bit like saying a computer consists of atoms. Yes, it does. But that doesn't tell us anything about how it works.
Caeli: It's reductionist.
Elliot: Yes. But we can move on from here. Morality is going to let people get good things. Let's ignore what the things are for now, and consider the getting. How do people accomplish their goals and get what they want? What will let them do that?
Caeli: Hey, it seems like we are getting somewhere already.
Elliot: Yes. So we're going to want power, in a very general sense. The more humans can shape reality, and have the power to get what they want, the more they will be able to get good things, whatever those are.
Elliot: Second is knowledge. People will need to know what is good or they might accomplish the wrong things.
Caeli: No wonder you mentioned epistemology in particular.
Elliot: Third is error correction. People might make mistakes while getting these good things, or they might be mistaken about what is good. So we're going to need to be able to deal with that and fix mistakes.
Elliot: Fourth is consistency. If people try for contradictory things, that won't work. Another way to say this is not to be self-defeating. If you're trying to get two different things, but it's not possible to get both, then you're bound to fail.
Caeli: This is cool so far.
Elliot: So these are our ingredients to build with. Now keep in mind that I could have named other ingredients. It isn't very important. There are a lot of ways to cover the same general ideas and name them different things.
Caeli: Alright, so what's the next step?
Elliot: Next we will do a thought experiment.
Elliot: The following is partly due to David Deutsch. It was his idea that for almost all practical purposes, it does not matter what the foundations of morality are, so long as you take morality seriously and apply it universally. And it was his idea to apply this to a morality based on squirrels.
Elliot: So, we haven't said what the good stuff people are trying to get is. Let's imagine the answer is maximizing the number of living squirrels, and see what happens.
Caeli: Isn't that absurd? And easily variable: why not bison?
Elliot: Yes it's absurd. But the consequences are interesting anyway. I don't want to give away the ending, so let's keep going.
Caeli: Let's clarify first. Do we want to maximize the number of squirrels today, or how do we count?
Elliot: The goal is the most squirrels at the most times. Take the average number of live squirrels at any given time, since the universe started, until the present, and that's your current score. The goal is to increase the number as high as possible.
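A minimal sketch of that score as a formula, assuming time is measured from the start of the universe: let s(t) be the number of live squirrels at time t; then the score at the present time T is the time average

\[
\mathrm{score}(T) = \frac{1}{T} \int_{0}^{T} s(t)\, dt
\]

The goal is to push this average as high as possible. A farm with a few thousand squirrels for a few decades, averaged over billions of years, barely moves it.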
Caeli: So should I start a squirrel farm and raise squirrels?
Elliot: Heavens no! Squirrels are dirty rodents. They might have diseases. You won't be able to sell them. Maybe you could get the government to pay, but then you'll be beholden to them.
Caeli: It sounds like you don't like squirrels. But for the sake of the thought experiment, shouldn't you pretend that you do?
Elliot: No. That isn't one of the things we're imagining.
Caeli: But we are considering a squirrel-based morality.
Elliot: We want to maximize their number over long timeframes. We don't have to like them.
Caeli: Won't liking squirrels help us treat them better, and increase their population?
Elliot: So, you build a squirrel farm. You have thousands of squirrels for tens of years. You increase the squirrel score a tiny fraction of a point.
Caeli: Isn't that better than nothing?
Elliot: It's not good enough to get a score of one.
Caeli: What should we do instead?
Elliot: Plan for the long term. The first thing to worry about is that a meteorite, exploding sun, or other large scale disaster wipes out humanity or squirrels or both. Making sure that doesn't happen is far more important than any farm. So we should focus on science before farms.
Caeli: That's counter-intuitive.
Elliot: Next, I'm worried about nuclear war, terrorists, and large problems here at home. They probably wouldn't be able to destroy us entirely, but they'd set us back and set science back. So we need good diplomacy and foreign policy to protect our scientific research.
Caeli: OK, that makes sense.
Elliot: And we need a powerful economy in order to produce materials for setting up millions of squirrel farms (or more). So capitalism and free trade are important.
Caeli: Hey, now we're getting somewhere, you've actually mentioned squirrels again.
Elliot: Yeah. Now where should we put these farms? On Earth, they'll just get in the way of people. Squirrels are dirty rodents that no one likes, and we need happy people to do science and capitalism.
Caeli: Shouldn't people change to like squirrels? Squirrels are the focus of morality!
Elliot: I don't see any need for that. We're going to put the squirrel farms on other planets. Humans and squirrels don't need to share any planets.
Caeli: Don't people like to have some squirrels around?
Elliot: Yes, I suppose so. Our dogs need something to chase. So we'll have squirrels in parks still. The point is we don't need to locate any squirrel farms on Earth. We want human civilization concentrated to reduce travel time.
Caeli: So your general idea is that the best way to maximize squirrels is to work on science, diplomacy, capitalism, and normal things -- the same things people care about today in real life -- and then eventually, when we are powerful enough, to colonize other planets with squirrels?
Elliot: Yes.
Caeli: Is there anything we should do differently? Maybe farmers shouldn't shoot squirrels.
Elliot: If a farmer shoots a squirrel, we get a tiny reduction in our squirrel score. If a farmer is unhappy because he didn't get to shoot a squirrel, we get a reduction in farm productivity, which will delay the squirrel colonies. Every day those are delayed, with their trillions (or whatever) of squirrels, counts for a lot.
Caeli: Will we colonize the moon with squirrels, or Mars?
Elliot: No, human colonization will come first. That will help us get the raw materials and production plants needed for a truly massive squirrel colonizing effort later.
Caeli: When will we finally make a lot of squirrels?
Elliot: Basically, once it's easy.
Caeli: Should we at least do it slightly before then? Perhaps as soon as we could do it universe-wide, instead of waiting until it's easy to do it universe-wide?
Elliot: That's a good question. But we need to be able to do it reliably. If we barely have enough resources, and we're stretched thin, then that's very risky. Something could go wrong. When it's easy is approximately when the risks will be gone.
Caeli: OK, that makes sense. We need to get this right. We wouldn't want all the colonies to die because we made a mistake.
Caeli: I guess reliability is very important. If our ultimate goal is lots of squirrels, we should do everything we can to make absolutely sure that that happens. So, what if people forget about the plan to colonize planets with squirrels? Or change their minds about the squirrel mission?
Elliot: That's a good question. And the solution is to have institutions in our society for error correction. What that means is we must have lots of criticism, all the time. In an environment heavy on criticism, bad ideas are refuted, so no rival theories will ever be able to challenge the squirrel theory. If the criticism ever went away then what would matter is things like how easy a theory is to remember, not whether it's true. So then squirrels might lose. Criticism is their best defense against false ideas.
Caeli: Won't there also be criticism of the squirrel theory?
Elliot: Yes. But so what? It's true, so it will survive the criticism. Whatever question you ask of it, it will have an answer. And all arguments will eventually lead people to squirrels.
Elliot: As a bonus, institutions of criticism like this have the happy property that if the squirrel theory is not right, or we've slightly misunderstood it, or whatever, then that error will be corrected.
Caeli: What if we just entrench the squirrel theory? We'll indoctrinate our kids with it. Everyone will be required to believe it. I know it's heavy-handed, but squirrels are worth it. Won't that be even more reliable? People can be stupid and might not understand the squirrel theory's brilliance, even though we know it's true.
Elliot: That would not work reliably at all. People might start to question the indoctrination. Or they might be indoctrinated with something else. Institutions might change over time. Preventing that is very hard. Or our civilization might go extinct because it has a static culture and can't do science. Or we might just never reach the stars and build the colonies the squirrel theory dreams of: we aren't indoctrinated with how to accomplish our mission, and indoctrinated people don't think freely, so they might not invent the answers.
Elliot: The one and only advantage squirrel theory has over rival moralities is that it's true. That does not mean it indoctrinates people better. So the only reliable thing to do is play to our strength and use criticism and persuasion.
Caeli: That makes sense, and it also is a much nicer way to live. We get to think instead of be taught to mindlessly obey.
Elliot: Yes :)
Elliot: Another thing to consider is that institutions of criticism, which keep the squirrel theory prominent, are far more important than actually creating lots of squirrels. If we neglect the squirrel project, we'll be led back to it. People will argue that we aren't making enough squirrels, and we'll change our policies. But if we ever neglect our institutions of error correction and criticism, then no matter how many squirrels we already have, we might stop caring about them and throw away the project overnight.
Caeli: OK, I think I've got the idea now. So, what's the overall point?
Elliot: This way of thinking applies to more than squirrels. Take any pattern of atoms, and make the goal to spread it across the universe, and what we'll need to do is maximize human power first, and then when we're ready, spread it in a stable, reliable, risk-free way. (Note: for squirrels, the pattern is not a single squirrel, it's a habitat with many squirrels, oxygen, water, and food.)
Elliot: So for any goal like that, we should ignore the goal and focus on human power. We need to enable ourselves first. And we need to learn how to accomplish the goal, and avoid mistakes, so knowledge and error correction come in there. And we wouldn't want to start a campaign to ban space flight, or science: that'd be inconsistent with our goal.
Caeli: OK, I see how all goals like that are best accomplished with the four ingredients you mentioned earlier.
Elliot: Amusingly, the goal of minimizing the number of squirrels also has very similar steps to maximizing squirrels. We need human power to reliably keep squirrels extinct, and make sure aliens never create any, and make sure a terrorist doesn't build a squirrel, and make sure squirrels never evolve again somewhere. So we must monitor the whole universe vigilantly, and we must keep the eradication of squirrels alive in public debate. Everything is the same, except what we do at the end.
Elliot: So, if the basis of morality is squirrels, or bison, or crystals, and we think carefully enough about what to do, then what we'd end up with is almost exactly the same morality that people believe in today: we'd first value human happiness, freedom, science, progress, peace, wealth, and so on. The only difference would be one extra step, much later in time, where we'd fill most of the universe with squirrels or bison or whatever.
Caeli: That's interesting.
Elliot: So each theory of morality is partly the same, and partly different. The part that's different could be maximizing squirrels. It could be maximizing bison. It could be minimizing squirrels. That part is easily variable, which makes it a bad explanation. But the other parts, about knowledge creation, wealth, happiness, human power, and freedom are all constant. They seem more universal. They at least universally apply to moral theories that minimize or maximize things (which includes any sort of utilitarianism).
Caeli: So, it doesn't really matter what the basis for morality is?
Elliot: Exactly. And suppose we thought the basis for morality was squirrels, but we were wrong. This would not cause any significant problems. We'd end up doing the right thing for now, and learning of our mistake long before we actually filled the universe with squirrels.
Caeli: OK. I think I'm getting the idea. But can you clarify how things like science follow from our ingredients?
Elliot: Yes, certainly. Science helps increase our knowledge. And this understanding of reality helps us better avoid errors. Power to shape reality comes from knowledge, but also from having great tools, and having resources. So we want robots, computers, factories, brooms, freezers, toilets, and so on. How do we get these? Capitalism. Free trade.
Caeli: What about fair trade, communism, and so on?
Elliot: Some people think those are what we need. It doesn't really matter to my point. We need a good economic system, whichever one it is. And we already know how to argue about which is good. We have lots of professional economists, and philosophers, who know about this. And we have lots of good books about it.
Caeli: OK. So go on.
Elliot: Acting consistently, and avoiding self-defeating policies, is a matter of knowledge too. If we wanted, we could boil it all down to knowledge, and useful physical manifestations of that knowledge. But it's better to go the other way, and boil it up to freedom, science, and so on.
Caeli: Why is freedom important?
Elliot: Thinking freely means there aren't any good ideas that are being automatically ruled out. And it means being free to question any ideas we already have, so we can find errors in them. Living freely means being able to shape our part of the universe in the best ways for us. And when everyone agrees about freedom, there will be no wars, and no fighting. Everyone will work on their own goals, and no one will mind, and no one will want to control others. Even if they disagree.
Caeli: So let me try to summarize the structure of your argument.
Elliot: Go for it.
Caeli: There are certain ingredients that help us get what we want, whatever that may be. And it turns out that what specifically we want is not critical: the way to get it will be about the same regardless. In short, the ingredients are knowledge and human power. But they imply valuing science, freedom, wealth, and roughly the same things we value today.
Elliot: That's right. So there you go. A justification of the morality we already know, from simple principles.
Caeli: What about rationality? I don't believe you mentioned it.
Elliot: You're right. We've been looking at issues on a large scale. How individuals should make choices is an important matter too. Rationality is one of the things I advocate they strive for. I'll tell you about individual-sized morality tomorrow, OK?
Caeli: Yes. :) Bye.

Elliot Temple on October 22, 2006

Messages (11)

https://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/

Good post about morality. I agree with the main point – you shouldn't be immune from criticism for being uninvolved, or attacked for not doing more when you make things somewhat better.

There is at least one important relevant factor in the drowning child example, which differentiates the child drowning nearby from the kids who die in Africa. Can you think of any?


curi at 7:15 PM on July 14, 2019 | #13096

One relevant factor:

I have high confidence that if I jump in and save the child drowning nearby, the child actually gets saved.

I don't have high confidence that if I send $5 to some charity that claims to be saving children in Africa, a child actually gets saved. I think it's likely that my money gets stolen by the thugs who are part of the reason the children are in the position of needing saving.


Andy at 6:38 AM on July 15, 2019 | #13098

That's a good factor.

The one I had in mind last night is that only the people in the area can help the drowning child. It's time-sensitive and location-sensitive. You might literally be the only person who can save him.

With spending $5 to help an African, their own parents could do it. Their own government could do it. Anyone could, from anywhere in the world, over a period of, say, 5 years for that particular kid.

Time-sensitive, local help is *irreplaceable*. It's important that it exists because there's no other way to get the same kinda help. And you can't blame the parents for not getting jobs or whatever, even very responsible and moral parents could need this kind of help.

Also, not that it's all that important, but if you save a white, Christian, middle class American kid, his parents would totally give you a $100 reward. It'd be socially weird but they would. If you bring up a reward, they might be confused at first because the kid's life is arguably worth, say, 100k, but they don't want to pay that much. And when the stakes are so big, it seems weird to ask for $20 or something trivial, like why even bother, do you actually care about a little money that much? But if you make it clear you just want a little money for your trouble for the time it took you and the damaged shirt, or whatever, they will pay you, no big deal. The service you did is worth money to them which they have. They can actually pay you for the time and effort and property damage to your clothing, so that you don't come out behind. That might be more common if drowning children were more common, but currently they are such a rare issue that people aren't too worried about rewards. It's not like it's dangerous to save a drowning child in general. Not if it's like a little child in a pool.

You have to be really fucking careful trying to save a drowning adult who is anywhere near your own strength or weight though, FYI. Lifeguards have actual training for that. Drowning people do not necessarily help or cooperate and can pull you under too. It's also more dangerous if you have to jump off some tall rocks or it's an ocean with waves or it's a Florida swamp with potential gators or various other factors.


curi at 1:11 PM on July 15, 2019 | #13102

WRT the drowning child: I think there's social status to be gained from having personally jumped in and saved a kid from drowning. I don't think that's why most people would do it in the first place, but once they've done it then the social status gain becomes a factor that they consider.

Some, all, or even more than all of that gain could be lost if you ask for a reward.

If you ask for a large reward you're both not likely to get it and you could end up with lower social status than you started with. I think that's the Copenhagen Interpretation of Ethics again - you saved the kid but in the process were willing to impoverish the parents. So you didn't help enough.

I think most people don't ask for even a small reward because they value the social status that comes from not asking more than the amount they could get by asking. The social narrative of saving a drowning kid and not asking for anything = hero. The social narrative of saving a drowning kid + asking for $100 to replace your shirt = petty.


Andy at 2:27 PM on July 15, 2019 | #13105

What is the trait?

What is your answer to #namethetrait ?

https://youtu.be/1t1Vvc6IQD8


Considering Veganism at 3:17 PM on November 6, 2019 | #14206

#14206 Watched 30s. I think I got the basic idea. An article would be better but the doc on screen helped.

The trait that morally differentiates humans from animals is being universal knowledge creators.

FYI universality is in David Deutsch's books. I don't know what background you're coming from (lots of people here have already read those).


curi at 3:22 PM on November 6, 2019 | #14208

What are the main similarities and differences between rule-consequentialism and deontology?

Which of those 2 do you find to be a stronger argument and how is it different compared to morality based on CR?


The Rat at 9:06 AM on November 9, 2019 | #14256

That sort of moral philosophy is stupid. How can you judge principles without considering their consequences? How can you judge consequences without any principles to evaluate or interpret them?

It's like, in science, splitting up data/evidence (consequences) and theories/explanations (principles). Both sides of the split are broken because they're too incomplete.

Rule consequentialism = we'll judge theories based on data. They don't seem to get that you need theories to interpret data, and that the same data can be interpreted in different ways by different theories, in addition to not getting that you can have a theory criticize a theory.

It's kinda like choosing between mind or body. Why try to pick one? What for? The whole thing is trying to solve a non-problem. The problem is like "What one thing should we base morality on?" But why not two or three or even a non-foundationalist approach? No answer. I think it's because they believe principles like "don't lie" clash with consequences like "the murderer finds his victim because i told the truth about the victim's location", and so you have to take sides because there's a contradiction. But this is a contradiction between 1) *bad*, naive, simplistic principles, not good ones (problem solved with better principles) and 2) results/data as interpreted by different principles (common sense, cultural defaults, which they don't realize they are using and don't examine, critically consider, write down, etc.)


Howard Roark at 12:55 PM on November 9, 2019 | #14257

#14257 Thanks, Howard.

I think rule-consequentialism does have principles (rules) that are evaluated based on their consequences.

Rule-consequentialism claims that an act is permissible if and only if it is allowed by a code that could reasonably be expected to result in as much good as could reasonably be expected to result from any other identifiable code. [See Brad Hooker's version of rule consequentialism]

I think rule-consequentialism does run into the same problem as deontology, in that you end up following a rule blindly into sometimes doing what would intuitively be thought of as morally atrocious behavior. Like not lying to a murderer.

They both seem to be extremely rigid in their application, they just happen to differ in their formulation.

I think I prefer rule-consequentialism over deontology, simply because it does take consequences into account when creating its rules. But I dislike that, if followed to its logical extreme, you end up justifying what I would consider immoral actions.


The Rat at 1:11 PM on November 9, 2019 | #14258
