Morality

Caeli: Hi!
Elliot: Hi, Caeli.
Caeli: Will you tell me about morality?
Elliot: Morality is an area of knowledge. It includes theories about how to live well, and how to make good choices, and what's right and wrong and good and evil. You could also call morality the theory of decision making.
Caeli: How can we determine what is moral, or not?
Elliot: For a lot of questions, we don't have to figure it out. We already know. We know stealing is wrong, and murder is wrong, and being kind to our friends is right. The usual thing to do is to use the knowledge we already have. We don't have to justify it. All we have to do is be willing to improve on it if it seems to have problems or flaws. But as long as it seems to work, then philosophically there's nothing wrong with using it even if we can't prove it's right.
Caeli: Then how will our moral knowledge get better? Do we really have to wait for problems -- for things to go wrong -- before we can fix them?
Elliot: No. If you want to, you can think about morality. If you do thought experiments, and imagine situations, and what you'd do in them, you can find problems now. And if you're suspicious there might be a problem, or just curious, then go right ahead and look for improvements.
Caeli: What else will help?
Elliot: It is good to fit theories into more general frameworks, or alter them to be more universal, or connect them with other ideas we have. Doing this is interesting in its own right; it's part of how we learn. But it also helps us correct errors. When our ideas don't mesh together nicely, that is a sign they could be improved. And the more generally they apply, the more varied examples we can try them out on, which will help reveal hidden flaws.
Elliot: Note that all this is exactly the same way that we would approach any other topic.
Caeli: What if I do want to know justifications for morality? Why are things moral?
Elliot: Well that's tricky, and we don't know the whole answer. But to start with, morality is the right way to live. That means it works well, in terms of whatever criteria are important. We have a lot of ideas about what those criteria are, like happiness, wealth creation, freedom, scientific achievement, or creating a lasting and valuable political tradition such as the United States government. And there are smaller things we value, like helping a friend, or teaching our child something he wanted to know, or cooking a tasty meal, or eating a tasty meal. Some of the things I just said might be wrong. Maybe they aren't so good after all. One of the things morality is about is figuring out which are right.
Caeli: I'm getting a clearer picture, but what about the foundations? What do we justify moral ideas in terms of?
Elliot: Let me be very clear and say again that we do not have to justify ourselves. And the whole idea of foundations is confused: we never discover ultimate, final, true foundations upon which we can never improve. There are always more subtle problems that we can work on.
Elliot: And knowing the correct foundations, or the reasons for things, is often not very helpful. Just because you discover how your fear of spiders came into existence doesn't necessarily mean you're any closer to getting rid of the phobia.
Elliot: Note, again, that this is the same for other topics besides morality.
Elliot: But with that said, it is interesting to think about why things are right. We have a lot of answers to that, but they aren't very well connected, and we could do with some deeper truths. I do have thoughts about that. I think I can answer your question to your satisfaction.
Caeli: That sounds good. Go ahead.
Elliot: What does morality consist of? Well, it's not supernatural. And it's not from God. What's left? It must come from physics, logic, and epistemology.
Caeli: What's epistemology?
Elliot: It is knowledge about knowledge. It answers questions like how we learn. But it doesn't just apply to humans. Lots of things contain knowledge. The obvious example is books or computers. Those contain the knowledge we put into them. Animals and plants also contain knowledge. Or perhaps saying they express knowledge would be clearer. I don't mean they have a compartment inside in which the knowledge is stored, though they do have DNA. But consider a tree. It expresses knowledge about how to turn sunlight into energy. A wolf expresses knowledge about how to hunt prey. There are also less obvious human-made examples: a table embodies knowledge of how to keep items in locations that are convenient for humans.
Caeli: Alright, I get the idea. You mean knowledge very broadly.
Elliot: Yes.
Caeli: Isn't epistemology a type of logic?
Elliot: Yes, you can think of it that way. In that sense, math is logic as well. And how to argue is a matter of logic. And how to lie with statistics is a matter of logic. I consider epistemology important enough to mention by name.
Caeli: Is logic part of physics?
Elliot: I don't know. But I do know that brains are physical objects, so our knowledge of logic comes only through physical processes, which we know about through physics.
Caeli: Alright, so morality consists of physics, logic, and epistemology. Now what?
Elliot: This might appear completely useless. It's a bit like saying a computer consists of atoms. Yes, it does. But that doesn't tell us anything about how it works.
Caeli: It's reductionist.
Elliot: Yes. But we can move on from here. Morality is going to let people get good things. Let's ignore what the things are for now, and consider the getting. How do people accomplish their goals and get what they want? What will let them do that?
Caeli: Hey, it seems like we are getting somewhere already.
Elliot: Yes. So we're going to want power, in a very general sense. The more humans can shape reality, and have the power to get what they want, the more they will be able to get good things, whatever those are.
Elliot: Second is knowledge. People will need to know what is good or they might accomplish the wrong things.
Caeli: No wonder you mentioned epistemology in particular.
Elliot: Third is error correction. People might make mistakes while getting these good things, or they might be mistaken about what is good. So we're going to need to be able to deal with that and fix mistakes.
Elliot: Fourth is consistency. If people try for contradictory things, that won't work. Another way to say this is not to be self-defeating. If you're trying to get two different things, but it's not possible to get both, then you're bound to fail.
Caeli: This is cool so far.
Elliot: So these are our ingredients to build with. Now keep in mind that I could have named other ingredients. It isn't very important. There are a lot of ways to cover the same general ideas and name them different things.
Caeli: Alright, so what's the next step?
Elliot: Next we will do a thought experiment.
Elliot: The following is partly due to David Deutsch. It was his idea that for almost all practical purposes, it does not matter what the foundations of morality are, so long as you take morality seriously and apply it universally. And it was his idea to apply this to a morality based on squirrels.
Elliot: So, we haven't said what the good stuff people are trying to get is. Let's imagine the answer is maximizing the number of living squirrels, and see what happens.
Caeli: Isn't that absurd? And easily variable: why not bison?
Elliot: Yes it's absurd. But the consequences are interesting anyway. I don't want to give away the ending, so let's keep going.
Caeli: Let's clarify first. Do we want to maximize the number of squirrels today, or how do we count?
Elliot: The goal is the most squirrels at the most times. Take the average number of live squirrels at any given time, since the universe started, until the present, and that's your current score. The goal is to increase the number as high as possible.
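As an aside, the scoring rule Elliot describes is a time-weighted average, which can be sketched in code. This is a hypothetical illustration only; the 13.8-billion-year history and the farm numbers are assumptions made up for the example:

```python
# Sketch of the "squirrel score": the average number of live squirrels
# over all time since the universe started.
def squirrel_score(intervals):
    """intervals: list of (duration, live_squirrels) pairs covering all
    of history; returns the time-weighted average population."""
    total_time = sum(duration for duration, _ in intervals)
    weighted = sum(duration * count for duration, count in intervals)
    return weighted / total_time

# A thousand-squirrel farm lasting 50 years is a tiny slice of history,
# so it barely moves the score (assumed ~13.8 billion years of zero squirrels).
history = [(13.8e9, 0), (50, 1000)]  # (years, squirrels)
print(squirrel_score(history))  # a tiny fraction of a point
```

This makes concrete why a single farm "is not good enough to get a score of one": the averaging window is the entire history of the universe.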
Caeli: So should I start a squirrel farm and raise squirrels?
Elliot: Heavens no! Squirrels are dirty rodents. They might have diseases. You won't be able to sell them. Maybe you could get the government to pay, but then you'll be beholden to them.
Caeli: It sounds like you don't like squirrels. But for the sake of the thought experiment, shouldn't you pretend that you do?
Elliot: No. That isn't one of the things we're imagining.
Caeli: But we are considering a squirrel-based morality.
Elliot: We want to maximize their number over long timeframes. We don't have to like them.
Caeli: Won't liking squirrels help us treat them better, and increase their population?
Elliot: So, you build a squirrel farm. You have thousands of squirrels for tens of years. You increase the squirrel score a tiny fraction of a point.
Caeli: Isn't that better than nothing?
Elliot: It's not good enough to get a score of one.
Caeli: What should we do instead?
Elliot: Plan for the long term. The first thing to worry about is that a meteorite, exploding sun, or other large scale disaster wipes out humanity or squirrels or both. Making sure that doesn't happen is far more important than any farm. So we should focus on science before farms.
Caeli: That's counter-intuitive.
Elliot: Next, I'm worried about nuclear war, terrorists, and large problems here at home. They probably wouldn't be able to destroy us entirely, but they'd set us back and set science back. So we need good diplomacy and foreign policy to protect our scientific research.
Caeli: OK, that makes sense.
Elliot: And we need a powerful economy in order to produce materials for setting up millions of squirrel farms (or more). So capitalism and free trade are important.
Caeli: Hey, now we're getting somewhere, you've actually mentioned squirrels again.
Elliot: Yeah. Now where should we put these farms? On Earth, they'll just get in the way of people. Squirrels are dirty rodents that no one likes, and we need happy people to do science and capitalism.
Caeli: Shouldn't people change to like squirrels? Squirrels are the focus of morality!
Elliot: I don't see any need for that. We're going to put the squirrel farms on other planets. Humans and squirrels don't need to share any planets.
Caeli: Don't people like to have some squirrels around?
Elliot: Yes, I suppose so. Our dogs need something to chase. So we'll have squirrels in parks still. The point is we don't need to locate any squirrel farms on Earth. We want human civilization concentrated to reduce travel time.
Caeli: So your general idea is that the best way to maximize squirrels is to work on science, diplomacy, capitalism, and normal things -- the same things people care about today in real life -- and then eventually, when we are powerful enough, to colonize other planets with squirrels?
Elliot: Yes.
Caeli: Is there anything we should do differently? Maybe farmers shouldn't shoot squirrels.
Elliot: If a farmer shoots a squirrel, we get a tiny reduction in our squirrel score. If a farmer is unhappy because he didn't get to shoot a squirrel, we get a reduction in farm productivity, which will delay the squirrel colonies. Every day those are delayed, with their trillions (or whatever) of squirrels, counts for a lot.
Caeli: Will we colonize the moon with squirrels, or Mars?
Elliot: No, human colonization will come first. That will help us get the raw materials and production plants needed for a truly massive squirrel colonizing effort later.
Caeli: When will we finally make a lot of squirrels?
Elliot: Basically, once it's easy.
Caeli: Should we at least do it slightly before then? Perhaps as soon as we could do it universe-wide, instead of waiting until it's easy to do it universe-wide?
Elliot: That's a good question. But we need to be able to do it reliably. If we barely have enough resources, and we're stretched thin, then that's very risky. Something could go wrong. When it's easy is approximately when the risks will be gone.
Caeli: OK, that makes sense. We need to get this right. We wouldn't want all the colonies to die because we made a mistake.
Caeli: I guess reliability is very important. If our ultimate goal is lots of squirrels, we should do everything we can to make absolutely sure that that happens. So, what if people forget about the plan to colonize planets with squirrels? Or change their minds about the squirrel mission?
Elliot: That's a good question. And the solution is to have institutions in our society for error correction. What that means is we must have lots of criticism, all the time. In an environment heavy on criticism, bad ideas are refuted, so no rival theories will ever be able to challenge the squirrel theory. If the criticism ever went away then what would matter is things like how easy a theory is to remember, not whether it's true. So then squirrels might lose. Criticism is their best defense against false ideas.
Caeli: Won't there also be criticism of the squirrel theory?
Elliot: Yes. But so what? It's true, so it will survive the criticism. Whatever question you ask of it, it will have an answer. And all arguments will eventually lead people to squirrels.
Elliot: As a bonus, institutions of criticism like this have the happy property that if the squirrel theory is not right, or we've slightly misunderstood it, or whatever, then that error will be corrected.
Caeli: What if we just entrench the squirrel theory? We'll indoctrinate our kids with it. Everyone will be required to believe it. I know it's heavy handed, but squirrels are worth it. Won't that be even more reliable? People can be stupid and might not understand the squirrel theory's brilliance, even though we know it's true.
Elliot: That would not work reliably at all. People might start to question the indoctrination. Or they might be indoctrinated with something else. Institutions might change over time. Preventing that is very hard. Or our civilization might go extinct because it has a static culture and can't do science. Or we might just never reach the stars and build the colonies the squirrel theory dreams of: we aren't indoctrinated with how to accomplish our mission, and indoctrinated people don't think freely, so they might not invent the answers.
Elliot: The one and only advantage squirrel theory has over rival moralities is that it's true. That does not mean it indoctrinates people better. So the only reliable thing to do is play to our strength and use criticism and persuasion.
Caeli: That makes sense, and it also is a much nicer way to live. We get to think instead of be taught to mindlessly obey.
Elliot: Yes :)
Elliot: Another thing to consider is that institutions of criticism, which keep the squirrel theory prominent, are far more important than actually creating lots of squirrels. If we neglect the squirrel project, we'll be led back to it. People will argue that we aren't making enough squirrels, and we'll change our policies. But if we ever neglect our institutions of error correction and criticism, then no matter how many squirrels we already have, we might stop caring about them and throw away the project overnight.
Caeli: OK, I think I've got the idea now. So, what's the overall point?
Elliot: This way of thinking applies to more than squirrels. Take any pattern of atoms, and make the goal to spread it across the universe, and what we'll need to do is maximize human power first, and then when we're ready, spread it in a stable, reliable, risk-free way. (Note: for squirrels, the pattern is not a single squirrel, it's a habitat with many squirrels, oxygen, water, and food.)
Elliot: So for any goal like that, we should ignore the goal and focus on human power. We need to enable ourselves first. And we need to learn how to accomplish the goal, and avoid mistakes, so knowledge and error correction come in there. And we wouldn't want to start a campaign to ban space flight, or science: that'd be inconsistent with our goal.
Caeli: OK, I see how all goals like that are best accomplished with the four ingredients you mentioned earlier.
Elliot: Amusingly, the goal of minimizing the number of squirrels also has very similar steps to maximizing squirrels. We need human power to reliably keep squirrels extinct, and make sure aliens never create any, and make sure a terrorist doesn't build a squirrel, and make sure squirrels never evolve again somewhere. So we must monitor the whole universe vigilantly, and we must keep the eradication of squirrels alive in public debate. Everything is the same, except what we do at the end.
Elliot: So, if the basis of morality is squirrels, or bison, or crystals, and we think carefully enough about what to do, then what we'd end up with is almost exactly the same morality that people believe in today: we'd first value human happiness, freedom, science, progress, peace, wealth, and so on. The only difference would be one extra step, much later in time, where we'd fill most of the universe with squirrels or bison or whatever.
Caeli: That's interesting.
Elliot: So each theory of morality is partly the same, and partly different. The part that's different could be maximizing squirrels. It could be maximizing bison. It could be minimizing squirrels. That part is easily variable, which makes it a bad explanation. But the other parts, about knowledge creation, wealth, happiness, human power, and freedom are all constant. They seem more universal. They at least universally apply to moral theories that minimize or maximize things (which includes any sort of utilitarianism).
Caeli: So, it doesn't really matter what the basis for morality is?
Elliot: Exactly. And suppose we thought the basis for morality was squirrels, but we were wrong. This would not cause any significant problems. We'd end up doing the right thing for now, and learning of our mistake long before we actually filled the universe with squirrels.
Caeli: OK. I think I'm getting the idea. But can you clarify how things like science follow from our ingredients?
Elliot: Yes, certainly. Science helps increase our knowledge. And this understanding of reality helps us better avoid errors. Power to shape reality comes from knowledge, but also from having great tools, and having resources. So we want robots, computers, factories, brooms, freezers, toilets, and so on. How do we get these? Capitalism. Free trade.
Caeli: What about fair trade, communism, and so on?
Elliot: Some people think those are what we need. It doesn't really matter to my point. We need a good economic system, whichever one it is. And we already know how to argue about which is good. We have lots of professional economists, and philosophers, who know about this. And we have lots of good books about it.
Caeli: OK. So go on.
Elliot: Acting consistently, and avoiding self-defeating policies, is a matter of knowledge too. If we wanted, we could boil it all down to knowledge, and useful physical manifestations of that knowledge. But it's better to go the other way, and boil it up to freedom, science, and so on.
Caeli: Why is freedom important?
Elliot: Thinking freely means there aren't any good ideas that are being automatically ruled out. And it means being free to question any ideas we already have, so we can find errors in them. Living freely means being able to shape our part of the universe in the best ways for us. And when everyone agrees about freedom, there will be no wars, and no fighting. Everyone will work on their own goals, and no one will mind, and no one will want to control others. Even if they disagree.
Caeli: So let me try to summarize the structure of your argument.
Elliot: Go for it.
Caeli: There are certain ingredients that help us get what we want, whatever that may be. And it turns out that what specifically we want is not critical: the way to get it will be about the same regardless. In short, the ingredients are knowledge and human power. But they imply valuing science, freedom, wealth, and roughly the same things we value today.
Elliot: That's right. So there you go. A justification of the morality we already know, from simple principles.
Caeli: What about rationality? I don't believe you mentioned it.
Elliot: You're right. We've been looking at issues on a large scale. How individuals should make choices is an important matter too. Rationality is one of the things I advocate they strive for. I'll tell you about individual-sized morality tomorrow, OK?
Caeli: Yes. :) Bye.

Elliot Temple | Permalink | Messages (11)

Linux Not So Easy

linux claims to be easy to use like OS X. in my recent experience, it isn't. it's still a total pain just to install stuff like Rails. tutorials happily tell me where to get RPMs of a dozen things. When I try apt-get (package manager) it can't find all the dependencies it needs to even install ruby. A tutorial tells me to compile and install rubygems from source.

Even the linux-like application package managers for OS X work better than the originals. I never had any trouble with Darwin Ports.

I'm not saying linux is bad. It's a good thing and it's improving. I'm just saying OS X is easier and more friendly.

and don't tell me i did it wrong. that's beside the point. i used google for a while. i still didn't figure out a pleasant way to install a rails environment. that's way too hard.

Elliot Temple | Permalink | Messages (2)

What People Care About

Google for: "how to live" moral

And it will suggest: Did you mean: "how to give" oral

And the Sponsored Links advertising will be about oral sex.


Elliot Temple | Permalink | Messages (0)

Non-Invasive Education

This is absolutely a must read. It's about giving computer access to slum kids in India, with no training to use it. Note the parts about physics problems, MP3s, gender roles, his opinion of teachers and adults, the fact that the kids have very little English comprehension, and the comments about "functional literacy".

The best part is that the mothers think this is good for their children. Just go read it.

Elliot Temple | Permalink | Messages (0)

Israel Lebanon Update

Israel Sets Goal of Pulling Troops Out of Lebanon by Sunday

I haven't read the news for a while. Last I heard, the UN was supposed to bring in 15,000 human shields (troops not allowed to shoot at terrorists, and positioned so that terrorists can shelter next to them, like they did in the Lebanon war) to protect Israel from Hezbollah. It looked like many of these soldiers would be from countries that don't recognize the state of Israel. Somehow that isn't insane enough to be laughed out of the UN. Meanwhile Israel was going to get nothing but a new government (because the current one messed up the war by taking a UN deal instead of killing more terrorists). Included in unkept promises, Israel was not going to get back the soldiers who were abducted to start the war, Hezbollah was not going to be disarmed, and Israel was not going to have security. Oh, and those UN peacekeepers would be sure to help rebuild things for terrorists. Like they did during the Lebanon war when they repaired roads that Israel had just bombed out of military necessity.

Now things are worse.
the United Nations forces, which are supposed to reinforce the Lebanese Army, were not up to strength. They number barely more than 5,000 now, only about 3,000 more than when the war ended and far short of the 15,000 called for under the resolution.

So Israel is about to leave, and the UN still hasn't really arrived. I'm actually not sure if the lack of UN presence is a good or bad thing.
last Friday ... Sheik Hassan Nasrallah, appeared in public for the first time to declare a divine victory.

That can't be good. He's no longer scared that we will kill him if he shows his face. We should kill him.
Israel also said it would continue aerial surveillance over Lebanon to prevent the resupply of weapons to Hezbollah until the resolution was fully carried out. That includes the release of two captured Israeli soldiers and the monitoring of the largely unmarked border between Lebanon and Syria, which helped to supply Hezbollah with sophisticated arms from Iran and Syria itself.

The United Nations forces say that such flights constitute Israeli violations of the cease-fire.

The UN thinks that trying to enforce the terms of the cease fire is a violation of the cease fire. No wonder they aren't enforcing the terms...
Since the cease-fire, 14 Lebanese have been killed and 90 injured by the bomblets, the United Nations said

Why don't we hear about dead Israeli kids who played with unexploded Hezbollah rockets? It's not because Hezbollah has high quality munitions that always explode. It's because Israeli parents are responsible enough to warn their kids about the danger.

Who is blamed for the irresponsibility of Lebanese people? The Jews, of course.
Israel captured the strategic plateau in the 1967 war and unilaterally annexed it in 1981. Some Israelis have suggested that given the threat from Iran, Israel should accept a Syrian offer for peace talks to try to wean Syria, a Sunni country, away from its alliance with Tehran.

Does the New York Times think this is subtle? When they want to give their opinion, they just say "some people said" and call it reporting. They even had the nerve to say this is the opinion of Israelis.

So what is their opinion? They want Israel to give away the Golan Heights to Syria so that terrorists can shoot off it to kill Jews. Why is this a good idea? Because it's appeasement, and if we learned anything from World War II, it's how well appeasement works. And if Israel doesn't understand this, it shows a lack of nuance.

So in conclusion, I'm glad I've been ignoring the news, and I will return to that policy for a while.

Elliot Temple | Permalink | Messages (0)

Mental Illness

Imagine that 4% of the population hears voices. Imagine further that 25% of those people are psychotic (note: this may not be the proper, technical usage of the term, but it doesn't matter to my point). No amount of explaining the situation seems to help them. The psychotic rate in the general population is thus 1%. The voices drive them crazy and make them completely dysfunctional.

Now, we know that hearing voices doesn't cause psychosis all by itself. 75% of the people who hear voices function perfectly fine. They ignore their voices, or make friends with them, or write books about them. As real as their voices seem, they are able to go on with life normally.

So, we can conclude that the psychotic people have a second thing wrong, which the non-psychotic people do not have. We'll call this disorder_2. disorder_2 may be a combination of many things, but it doesn't matter to my point. It's the set of whatever things are needed to make people with disorder_1 (hearing voices) psychotic. Note there could also be multiple separate options for what disorder_2 is: different ways to turn hearing voices into a problem. But this also isn't important right now.

So, what do we know about disorder_2? We know that it prevents people from understanding our advice about voices. They don't seem to listen when we tell them the voices aren't real. Or they can't figure out which voices we mean. But there's something a lot cooler that we know.

Assuming disorder_1 and disorder_2 are statistically independent, the rate of disorder_2 in the population is an amazing 25%. So we have this thing, which is very common, and it makes people not listen to reason.
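The arithmetic behind this can be laid out explicitly. The numbers are the ones from the post (imagined, as stated above); the code is just an illustrative sketch:

```python
# Rates imagined in the post.
p_voices = 0.04                  # disorder_1: fraction of people who hear voices
p_psychotic_given_voices = 0.25  # fraction of voice-hearers who are psychotic

# Psychotic rate in the general population: 4% x 25% = 1%.
p_psychotic = p_voices * p_psychotic_given_voices

# If disorder_2 is statistically independent of disorder_1, then the
# fraction of voice-hearers who have disorder_2 equals its base rate in
# everyone, so disorder_2 affects 25% of the whole population.
p_disorder_2 = p_psychotic_given_voices

print(round(p_psychotic, 4), p_disorder_2)  # → 0.01 0.25
```

The surprising step is the independence assumption: it lets the 25% conditional rate among voice-hearers stand in for disorder_2's rate in the general population.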

First we will consider that it might be a brain lesion. But if it is, why can't we tell people that? Why can't they step back, and know their brain is damaged, and not trust their own judgment and listen to us? Well, maybe they have a second brain lesion on the part of the brain which allows for that. But then, why can't they know *that*, and get some perspective, and figure out something to do that is rationally compatible with their brain malfunctioning? Why doesn't the person say he's really confused and just sit down, and not do anything, and ask for help, and get people to help him work out what is real? Well, maybe there is a brain lesion on the part of the brain needed for *that* too. But no matter how many brain lesions we postulate, there will always be a creative solution for how to continue rationally. There's only one way there won't be a creative solution: if, due to all the brain lesions, the person isn't creative anymore. The person isn't a thinking human being anymore.

But most psychotic people aren't that far gone. Sure they're crazy, but they are also people. They still speak English, and do all sorts of things that a cow can't do. So let's imagine that disorder_2 is *not* a brain lesion.

What else might disorder_2 be? One possibility is that it's being irrational. Bear in mind that not all irrationality is the same. So imagine there is a specific type which is disorder_2. When someone with this form of irrationality hears voices, he can't be talked into reacting rationally, because he's not a rational person. This would explain all the data.

Now, let's consider how to treat these people. First consider treating disorder_2, assuming it is an irrationality. Suppose we have a set of arguments and explanations which cures this irrationality. This would be the best course, because we know the voices will then be harmless, and we know that being irrational will have other bad effects besides causing psychosis in people who hear voices.

Now imagine we have a drug which cures hearing voices. This would instantly cure psychosis in these people. But they would remain irrational. The cure would still be a very good thing. However, there is a danger. The person might be confirmed in his irrationality. If he believed that his only problem was disorder_1, the voices, he would wrongly believe his worldview wasn't causing any problems, even though it was. If one of his friends told him that part of the problem was his irrationality, he could take his cure as proof that his irrationality was not the reason for his psychosis.

To avoid this danger, what would we need to do? It's pretty simple: we'd tell people that we are not curing their real problem. We are removing something from them which is completely harmless, but which reacts badly with their real disorder (this would be true whether the real disorder is irrationality or not). We might try an analogy to explain, like this one: they are like a man who gets angry at a number of things, including pillows, and we've removed all pillows from his house. Instantly, he is not angry when at home. But we haven't really cured him.

So the two primary conclusions we should take from this are:

1) irrationality may be a necessary component of many mental illnesses

2) many cures for mental illness, no matter how effective they seem to be, may be just like removing pillows. they may not be cures at all.

Elliot Temple | Permalink | Messages (0)

Ohio Is Backwards

http://bitingbeaver.blogspot.com/2006/09/morality-clauses-ec-and-broken-condoms.html

Know what the problem with hospitals in Ohio is? They wouldn't prescribe emergency contraception (EC) pills to this girl unless she was raped or married (the condom broke). She does not want a fourth kid. There is no help within a hundred miles.

At the hospitals that prescribe EC at all, they have "morality clauses" where the doctor interviews you and you have to meet certain criteria (married or raped). She's been completely unable to get EC. (And is now considering taking large quantities of other pills that might work, but she isn't sure if it's safe or effective.)

EC is over-the-counter now, not prescription, but not at a pharmacy within 100 miles for her. Her local pharmacy says they'll sell it next January.

Know what would solve this? Well, you may be thinking less religion. That would indeed work for this particular breed of insanity, but it would only avoid religious problems. There is a more universal solution: greed.

If people were more greedy, they'd sell her the damn pills to make a buck. No matter what crazy ideas they have, religious or otherwise, if they were greedy enough they would engage in free trade with anyone who isn't dangerous.

Elliot Temple | Permalink | Messages (0)

Announcement: Dialogs

I have started writing philosophical dialogs. The topics so far are mostly parenting, epistemology, and some political stuff like free trade and war. There are 12 so far. I'm updating them a lot more than my blog. You can find them at:

http://curi.us/dialogs/

They are currently in the order they were written, from top to bottom, but they will probably be reorganized later.

Elliot Temple | Permalink | Messages (0)