one meaning of the word “rational” is “has authority, is Right”. even “True”, sometimes. this is a bad meaning, but a standard English meaning nonetheless.

basically ppl are justificationists and use “rational” to mean what’s good in that epistemology. and the basic point of justificationism is: if an idea doesn’t have authority, why would anyone (rationally) accept it? the "rationally" excludes reasons like arbitrarily due to taste, or chosen by self-interested bias.

what’s good in their epistemology is authority and being Right and having the Truth.

for this approach to reason, an idea can be rational. saying an idea is rational is claiming it is True, or at least has Authority.

a better approach is to see reason as a process. the point is to think about ideas in a way that can make progress. that means that if a mistake is made, the process can find out and correct it. error correcting approaches are rational, and ones which prevent error correction are irrational.

so, the standard idea most people have is: knowledge = JTB (justified, true belief)

meaning you should Believe ideas which are True, but you only get credit if you believe them due to authority (justification), NOT for any other reason (b/c if they don’t have authority but turn out true, why'd you believe it? that wasn't a rational belief, you didn't know it was true, you just got lucky that it happened to be true)

Elliot Temple | Permalink | Comments (0)

Opposite of Authority

i've been trying to figure out a good word that means roughly no-authority, but approaches the issue in a positive way, saying what the concept *is* rather than isn't, and not mentioning any opposing view. i don't think there is such a word! :(((

the word "freedom" isn't specific and clear enough. same for "liberal", "liberty", "reason" or "rational".

the words "humility" and "modesty" are too negative and have mistakes mixed in. their synonyms are worse.

"decentralized" doesn't get the point across well, it's too much of a special case.

"non-justificationist" would not be understood, nor would "fallibilist".

the word "independence", like some of the others, is too connected with politics, not philosophy. also, like some of the others, lots of people who are pro-authority would be in favor of it and not see the contradiction.

"cooperation" is too specific (not covering all the meaning) and people won't understand the connection.

"no-authority" or "anti-authority" are both defining it in terms of what it's not. i want a word which means the right concept, rather than just denying an opposing view

some other potentially useful words are "self-rule" (which sounds way too much like being your own authority), "self-determination", and "autonomous".

btw if you look up the antonyms of "authority", they are terrible. they are words like weakness, powerlessness, inferiority, and disadvantage. that is what people think lack of authority is like. it tells you something about how badly they want authority, and see authority as a major goal and all around desirable thing. http://thesaurus.com/browse/authority

it's really sad, and says something about our culture, that there's no anti-authority word that is on par with the word "authority" for clarity, being well known and understood, applying broadly, and other things one would want from a word which make it easy to use effectively. like you can use "authority" for both political and epistemological discussions and people won't blink, they won't find it at all odd, weird, confusing or objectionable, it's just natural. but with a word like "independence", that's primarily a political word, and people will hesitate if they see it in a philosophy context.

one place I'd like to use this word is a slogan. something kinda like:
initiative, responsibility, criticism, persuasion, humility
the other words there are decent enough but "humility" is bad

another reason i want a no-authority word is for epistemology discussion. i've identified rejection of authority as perhaps the most important theme of the good epistemology, more than fallibilism, criticism, rationalism, objectivity, or existence (Rand considered "Existentialism" before "Objectivism", but it was taken).

i think the top two epistemology concepts are no-authority and error-correction. both are implied by fallibility but no one knows that. (nor do they know much about the relationship between the "critical" in "critical rationalism" and error-correction).

they also both imply each other. the rejection of authority entails needing to worry about error (as opposed to trusting authority to be right). and correcting errors requires considering the merits of ideas, rather than their amount of authority.

another word that would be nice is error-correction as one word. but that's not so important because error-correction is a descriptive phrase that positively identifies the right thing to do, without mentioning any opposing view.

another word I'd like is a positive word for non-TCS-coercion. "common preference finding" doesn't work well, it's too specific and is jargon.

another word I'd like is a positive word for non-force. there's "peace" but that's for groups not an individual level. there's "cooperation" but using that word does not communicate to people "no force allowed". people will consider relationships involving some force to be "cooperative". there's "voluntary" and "consensual" but they don't really capture it.

post suggestions in the comments

Elliot Temple | Permalink | Comments (9)

Philosophy Research Step-By-Step Guide

Want to do philosophy research? Want to get involved, contribute something and learn something? Here's how:


Elliot Temple | Permalink | Comments (0)

Philosophy Puzzle

It is not fitting for humans to take on parochial labor roles. Rather than divide labor, we must abolish it: labor is for machines. When we have universal machines construct according to our mind and thought, then each man will be an island, and there shall be no more division of labor.

I wrote this argument with references to four people. Who are they?

I won't say if guesses are correct unless you get all four exactly, or you explain the reasoning for your guess.

Bonus points if you can figure out what the argument is saying, analyze whether it's a good argument, and explain why I consider this puzzle worthwhile.

Elliot Temple | Permalink | Comments (0)

Philosophy Discussion Group

Want to learn more philosophy? Got questions? Got Ideas? Want criticism? Want to read philosophy discussions? There is now a single best place online to go: The Fallible Ideas discussion group.

Elliot Temple | Permalink | Comments (45)

Review: Influence: The Psychology of Persuasion by Robert Cialdini


this book doesn't use critical thinking. it takes some evidence and tells a story to explain it and doesn't tell us how it knows that story is correct and not some other story. it doesn't consider and criticize rival explanations. it just gives selective attention, over and over, to favored explanations. why are those explanations favored? it never says.

each influence tactic it discusses is acknowledged as fallible. but there's no extensive discussion about when and why they fail. the author doesn't seek explanations about what differentiates the successes and failures, instead he simply accepts and ignores the failure rate. the "scientists" involved in the field try to argue that X causes Y by doing an experiment where X happens and Y results, with a control with no X and no Y. they don't consider the Z and W that also differed between the test group and the control. but they also don't consider what A, B or C could be added to stop it from working anymore, or what background factors D, E, and F are required to be present for the X/Y relationship to work.

generally, the book is concerned with selective positive claims and not with error, correcting error, or considering everything.

the book is also sometimes wrong, incompetent or dishonest about technical details. in one case it described a drop from 38% to 10% as impressive. but it was in circumstances where we would have expected a 2/3 drop anyway. a 2/3 drop already gets us down to 12.66% so the observed drop could easily be within the margin of error, but that isn't mentioned. there was also a straightforward reason for a greater than 2/3 drop. yet the 38% to 10% drop was simply presented as a large drop scientifically proving the point – it was treated as evidence of the author's particular story about why there would be a drop in this case.
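the arithmetic can be checked directly. here's a quick sketch (the 38%, 10%, and 2/3 figures are the ones given above; the exact baseline is an assumption from the review, not from the book):

```python
# Figures from the review: the book reports a drop from 38% to 10%,
# in circumstances where roughly a 2/3 drop was expected anyway.
before = 0.38
after = 0.10
expected_after = before * (1 - 2 / 3)  # a 2/3 drop leaves 1/3 remaining

print(f"expected after a 2/3 baseline drop: {expected_after:.2%}")
print(f"observed: {after:.2%}")
```

so the expected baseline alone gets you to about 12.67%, and the observed 10% is only a few percentage points lower, which is why presenting 38% to 10% as scientific proof of the author's story is misleading.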

another issue is the book doesn't try to apply what it claims we're learning. it will raise some point, e.g. that people are biased in a particular way in a particular kind of situation. then it won't go looking for what other situations that applies to. for example it talked about fraternity hazing and how people try to act consistent with their sunk costs, so if they already went through tough hazing then they try to like the group they joined. hazing is an easy target, but why not consider whether the same principle can be used to criticize some less well known target? for example, people with PhDs are a group with high entry costs, so should people with PhDs watch out for bias regarding how much they think their PhD was worth the price and how nice life in the PhD recipient group is? the book consistently doesn't go the extra mile to consider anything controversial, it keeps just discussing psychological factors to make points that are already popular.

the book has other flaws too.

Elliot Temple | Permalink | Comments (0)

Physics Is Fun

This was originally posted 2003-09-16. It's not there anymore and I've decided to repost it. This story is 100% pure fiction. David had no involvement in creating or posting this.

This is a sex story. If you don't like sex, don't read it. It's also a comedy, so if you don't like comedies don't read it. Also, if any of the characters resemble real people, that's purely a coincidence, so don't sue me.

If you do like it, you might also want to read my Worst Romance Story Ever.

Physics Is Fun

by curi

Once upon a time, Elliot danced around merrily in a grassy meadow in England. He was merry because England is such a merry place. But it was a hot day and Elliot soon tired. He sat under an apple tree where it was shady. He took his shirt off for use as a headrest against the rough bark. Sweat glistened on his cut chest, and he flexed his muscles, enjoying the freedom that came with the removal of his constricting shirt. Elliot was in a pleasant place, so he sat and mused on matters philosophical.

Suddenly, an apple fell from the tree, and landed but two feet from Elliot. It broke on the ground and juice splattered onto him. But it had more effect than just making Elliot sticky. It set him thinking about gravity!

A few minutes later, Elliot set off at a run. He absolutely had to tell David his insight. Moments later, he burst into David's living room. "I've got it!" he announced.

David was wearing a Tie-Dye shirt that said "Death To Sociobiologists" in red and purple letters. Obviously he wasn't expecting any other visitors that day. His shorts were short, white, plain. Totally boring. On the upside, he'd really let loose with some thong...sandals.

David's chiseled abs made bulges in his shirt, and his painted toenails added a glimmer of fashion. He had a strong chin with a dimple, like Spartacus. And his eyes were dreamy brown, the sort to lose yourself in.

"Got what, my dear boy?" asked David, looking up from his computer.

Elliot's eyes met David's. Time seemed to slow. Usually Elliot was careful to keep his eyes downcast to avoid this awkwardness, but in his haste he had forgotten himself. Mmmm. Elliot tried not to drool.

As Elliot gazed into David's eyes, David took in the sight before him. And with a bare chest, it was some sight! David saw the bit of sticky brown apple juice. And he saw Elliot's fit figure. He saw Elliot's cute belly button, and his sexy haircut. And he saw Elliot's eyes staring into his own. David tried to focus himself.

Eventually, Elliot came to himself and stammered out some words. "I did it! I united quantum physics and general relativity into a new theory of quantum gravity!"

David was shocked to his senses. This was an impressive feat that Elliot had accomplished. Not just impressive. Earth-shattering. One of the largest breakthroughs in physics history. Or at least it would be, if Elliot was right...

As Elliot waited for David to respond, he tried to avoid eye contact, so he kept his gaze downcast. The result was that he stared directly at David's crotch. He noticed the outline of David's cock on the thin shorts, and how the leg of the shorts rode up on one side revealing extra scrumptious leg.

"That's amazing!" David finally exclaimed. "But this is no place to talk. We better make ourselves comfortable in a nice, fluffy bed."

Elliot followed David warily, but David soon explained. "Distractions reduce the creativity available for the problem at hand," David began, "because they themselves require the use of some creativity. It is best to avoid them when one wants to focus. In a bed, one maximises his comfort, and therefore his creativity. It's really the best way to think."

It was a queen-sized bed, fit for a queen. It was covered with pink sheets and a matching pink comforter. Pink was, after all, David's favorite colour. The two men grabbed feather pillows (with pink pillowcases) and laid down luxuriously. "This bed is so soft," said Elliot.

"Soft like your skin," replied David. Elliot thought for a bit and replied, "I'm not sure. It's hard to tell if my skin or the bed is softer." Elliot rubbed the sheets and his body by turns, trying to decide.

As Elliot rubbed his bare chest and arms, David mused, "You know what this reminds me of? This one time in high-school we were doing a friction experiment, to see if stone or plastic wheeled cars would go down a wooden ramp faster. It turned out the plastic wheels had less friction because they were smoother. But I guess you're not plastic, stone or wood, and neither are the sheets, so this doesn't reveal the answer to your query."

"Well, I can't decide," said Elliot. "But speaking of friction experiments, I know another one. If a dick enters an ass abruptly, there is heavy friction thus damaging the ass. But if you use lubrication, it doesn't hurt at all. Also, if the dick is sufficiently small, like in the case of Dan, that would lower the level of friction."

"You're right," said David, "that's a good point. The reason the entering abruptly has more friction, is that there are two sorts of friction, static and dynamic. No matter what speed you go, you'll get dynamic friction. But, if you overcome all the static friction at once, it will be one painful burst. If you overcome it slowly, spread out over time, the average friction level will be lower."

"Wow, I never thought of that," exclaimed Elliot. "I'm sure that new knowledge will come in handy next time I seek an anal common preference."

"Well, we didn't come here to discuss friction," David pointed out. "You were going to tell me your new theory of quantum gravity. I can't wait to hear it, I'm so stimulated."

And so it came to pass that Elliot and David lounged on fluffy pink pillows, and talked of physics. As they reclined on the queen-sized bed, they only faced one sort of distraction, so their creativity was very well nurtured. As beds are a great place for physics, and also a great place for the other sort of distraction, one couldn't help but compliment David on his ideal setting selection.

Words flowed naturally from the intellectuals, like pre-cum from an aroused cock. But after some progress, words began to flow quickly, in spurts of brilliance, just as cum spurts from an orgasming cock. Then Elliot and David reached an epiphany of understanding, and glowed happily, like the afterglow of a man who had sex.

"I think I got the thrust of your theory," said David, "but I could use some hands on learning. Got any ideas to help?"

"Let me demonstrate my theory with a quick skit," Elliot answered. "Your ass is a well known quantum gateway, so let me just apply some gravity to my dick, and penetrate the gateway, thus combining quantumness and gravity into a unified whole."

"You're such a genius," David announced. "My skits are always so dull. I'm glad you invented a more exciting version that really draws the audience into the action. It's a nice breakthrough in educational technique."

"I'll educate your ass!" shouted Elliot.

"Not if I educate yours first!" shouted David. "I'm a master educator."

Elliot and David both tried to move behind each other at once. But the laws of physics intervened, and they collapsed on the bed, thwarted.

"Wait a tick! A thought strikes me," said David. "You know in The Fabric Of Reality where I talk about Cantgotu environments? Well, we're trying to get to a position that we can't go to. But I think I have a solution."

As David explained his idea, Elliot's eyes lit up with glee. "Oh, your intellect makes me so hot," said Elliot, "keep talking physics to me!"

"And so you see," continued David, "using your new theory of quantum gravity for some extra pull, and relying on the high density of our engorged cocks, I believe we can just manage to collapse into a Cantgotu environment."

It came as a surprise to no one that David had spoken truly. So soon enough Elliot and David were fucking each other's asses at the same time! It was a true testament to the power of new theories of physics.

Elliot's hard cock filled David up, and David moaned with pleasure. He never knew it could be this good. And in turn, David's cock filled Elliot up. He too never knew it could be this good. They pounded each other roughly.

It wasn't long before their breathing came faster, and their pleasure heightened. There's a reason we can't go to Cantgotu environments: they're so intense it's hard to take! Soon, Elliot and David shot buckets of jizz into each other's asses.

After their heart and ass pounding sex, they lay in bed in each other's arms, in a dreamy state of bliss.

"I'm glad you put your knowledge of friction to good use," whispered David, "my rectum thanks you for it."

"My pleasure," answered Elliot. "I'll be happy to apply physics knowledge with you anytime."

Elliot Temple | Permalink | Comment (1)

The Reach of Physics and Epistemology

This was originally written in a March 2012 email:

Having a bit of knowledge about physics is important to most fields.

For example: tennis, chess, hockey, baseball, architecture, chemistry, biology, cooking, cleaning, building computers, building chairs, and so on.

The amount of physics knowledge needed for basic competence in these fields is small: the large majority of people in our culture have enough already.

You don't see people trying to heat their food in the freezer.

You don't see people losing tennis tournaments because they were confused about physics.

You don't see people doing chemistry experiments using only water and expecting each portion of water to transmute into the right chemicals because they want it to.

So, people take for granted having some understanding of physics as background knowledge. That knowledge still matters and it's still correct to say physics has a lot of reach even if people take it for granted.

If you get this basic physics stuff wrong, you can be really screwed. All sorts of stuff can go horribly wrong. Getting it right does matter a lot.

In general you don't need to know the details of quantum physics. That has less reach. It's quite important for some stuff like building nanometer-scale computer chips. But you don't need to know any quantum physics to win a tennis tournament or cook dinner or even to build a skyscraper.

To do basic science you do need to know some physics, but often not quantum physics, and often not any physics that goes beyond the background knowledge your average scientist will have and get right. If they messed up the physics they need, it could easily invalidate all their experiments in their field and make all their conclusions wrong, but in practice this rarely comes up because they don't get it wrong.

There are people who get a lot of basic physics wrong. We call them superstitious or gullible or stuff like that. It matters. But they are a minority. And a lot of the people watching science TV shows or getting fooled by bending spoons or talking about "crystal energy" or "dreamcatchers" aren't actually getting physics wrong, they are making different kinds of mistakes like they think it helps provide meaning for their life and they intentionally don't think about whether the physics is right or not.

Epistemology is a lot like physics in this regard. A relatively small amount of epistemology knowledge is relevant and important to pretty much every human endeavor. It matters to tennis, chess, hockey, baseball, architecture, and all the rest, same as with physics.

And our culture has some good quality knowledge of epistemology which people take for granted and routinely use.

But, contrary to physics, most have large mistakes in their basic epistemology background knowledge. There are widespread mistakes in our culture. And they don't just affect some special minorities that stand out, they affect 99%.

The result? All sorts of stuff goes wrong, and people don't know why or sometimes don't even know something went wrong.

People do lose tennis tournaments due to bad epistemology. That's actually common. Top people in all types of competition face significant psychological issues. They have to keep the right kind of mindset and focus to play their best. And what happens is they get to the finals and make a mistake. Then they make 5 more mistakes. Then, some people will set it aside and continue to play their best. But other people will get frustrated and have the wrong attitude to mistakes and let it "rattle" them, and will "lose focus" and start playing worse and making more mistakes they wouldn't normally make if they were relaxed in a low pressure situation, or wouldn't make if they weren't frustrated with previous mistakes.

Sometimes these problems dealing with mistakes decide a match. Better attitudes to mistakes and learning, and better understanding of their mind and emotions -- better philosophical knowledge -- could have won the match.

Sometimes players come back the next year, get in a similar situation, but then get past it and win this time. They thought hard about it and improved their epistemological knowledge (and some other knowledge too). Without knowing the name of the field. Without having the benefit of a lot of already-known and useful stuff in the field. They have to re-invent some stuff, and pick some up in bits and pieces from advice from their coach and sports/competition-related books and so on.

Some people never get past these mental issues and never become champions. That's common too. It's hard to reinvent enough epistemology and pick it up from scattered places. More people fail at this than succeed.

Epistemology comes up, and sometimes goes wrong, in all sorts of more mundane situations too. People get frustrated while playing a video game and throw the controller at the TV and break it, or just feel bad. People get stuck playing a video game and don't improve. People fight with their friends when playing a team video game and blame each other for letting the team down. Bad epistemology (in the background knowledge of our culture that people take for granted) contributes to these problems and good epistemology could address them.

And epistemology comes up, and goes wrong, when scientists start talking philosophy and trying to draw philosophical conclusions from their work.

Just like there are some places where physics reaches more (e.g. building GPS devices) and more advanced physics is important, there are also places where epistemology reaches more and more advanced epistemology is important.

Without physics well beyond the background knowledge in our culture, you're going to have a lot of problems building a GPS device. The background knowledge isn't even close to good enough.

And without more advanced knowledge of epistemology, you're going to design schools wrong. Education is an area where epistemology very heavily reaches. The background knowledge about epistemology in our culture is faulty, but the error rate with some of the "more advanced" knowledge (like explicit versions of induction, empiricism, justificationism and other stuff you can read in philosophy books) is a lot worse.

It's a bit like using superstition to build a GPS device. It's so wrong that you make a complete and utter mess of things.

That is, by the way, why our schools are "failing". (They don't even know what succeeding would be and are judging by the wrong criteria. But our schools do happen to be bad according to better criteria too. FYI US schools are far better than all the Asian countries though.)

So there are various areas where epistemology is extra relevant. You don't just need a bit, you need lots. Everything has to do with learning, but some stuff more than others.

Epistemology heavy topics include: education (including lots of parenting stuff), morality, stuff to do with organizing knowledge (like programming or organizing a library), stuff to do with brains, stuff to do with how people or animals or computers or anything think or learn or create knowledge, stuff to do with evolution, stuff to do with ideas or types of ideas (like the distinctions people draw between emotions vs theories vs values vs guesses, etc), qualia, stuff to do with fallibility, errors, mistakes, sources of error, good explanations, judging explanations, methods of interpreting observations, scientific methods (b/c the point of science is to create knowledge, so the methods for doing that are methods of creating knowledge, methods in epistemology).

When scientists try to do science to address questions about how people think and live, and how that compares with animals, and the consequences for morality, they are straying especially heavily into epistemology in multiple ways and going far beyond what cultural background knowledge can be expected to handle (sort of handle, but actually fail a fair amount). When their epistemology is grossly false, they make multiple large mistakes per substantial idea in these areas, and so all their conclusions are crap.

Elliot Temple | Permalink | Comments (0)

People Mostly Hurt Themselves

The typical pattern of romantic relationships: people hurt themselves and blame their partners.

This evasion prevents most self-improvement.

Other areas of life are similar. When someone doesn't have the career or social role they want, they typically hurt themselves far more than anyone else hurts them.

Elliot Temple | Permalink | Comments (0)

no philosophy equals big risk

learning productivity multipliers (such as philosophy) ASAP is most efficient by far. i think that's something worth aggressively optimizing.

for example, some people prioritize their career ahead of philosophy. so then they do all kinds of career stuff which they could have done twice as fast if they were better at philosophy and a few other skills. like they have to learn some skills for work, and they learn them slowly, whereas if they knew more about learning they could have learned it a lot faster.

or people start dating and get married before learning about romance philosophy. big mistake.

besides philosophy first being a more efficient order (if you are ever going to learn to learn faster, the sooner you do that the more efficient, since you get to use it in the most cases), it also helps deal with mistakes of various kinds (like marriage. marriage shouldn't be done at all at any speed).

how is one to know whether he's making a big picture mistake he'll regret later, without knowing lots of philosophy? i think it's a serious risk. this includes both risks of doing things badly out of order so it's really inefficient and also risks of doing something that shouldn't be done at all in any order.

so learn a substantial amount of philosophy ASAP or huge risk of disaster. those are the only choices.

put another way, you should start on the beginning of infinity track, now. that means thinking, learning, aiming for lots of speedy progress. you've gotta start making progress now, not at some indeterminate point in the future. and if you're trying to make rapid progress as your standard way of life, then to do it well you've gotta learn what's known about how to do that (which is called "philosophy").

ppl often seem to think the risks of doing their life as-is and meeting current preferences are low. they think that things seem to be going pretty well, how bad can it be? maybe they even know some philosophy and fixed some mistakes, and think there can't be too many more (uhh what? how do you know how many more there could be? we're all alike in our infinite ignorance!)

i think basically anything but doing quite a bit of philosophy is extremely risky. also i do and know more philosophy than you and i'm telling you it's risky. so why are you doubting me when you have no criticisms of my philosophical positions? and since you do way less philosophy, how would you even evaluate the risk? it takes philosophy to evaluate how much danger there is and to do anything about the danger. so how can you decide it's an ok risk to take when you lack the knowledge to even understand the risk?

Elliot Temple | Permalink | Comments (0)


Jews and Israel are good. Anti-semitism is bad, including when it pretends to be criticism of Israel.

As far as I know, none of the following people made a public pro-Jewish statement: David Deutsch, William Godwin, Edmund Burke, Thomas Szasz, Karl Popper and Ludwig von Mises.

They should have. That's why I wrote this post. I think it's important to be clear about this issue.

Ayn Rand did, see comments below. :)

EDIT: Clarified wording, 2014-07-01

Elliot Temple | Permalink | Comments (11)

Edmund Burke Anti-semitic Comment

Other revolutions [than the French one] have been conducted by persons who, whilst they attempted or affected changes in the commonwealth, sanctified their ambition by advancing the dignity of the people whose peace they troubled. They had long views. They aimed at the rule, not at the destruction of their country. They were men of great civil and great military talents, and if the terror, the ornament of their age. They were not like Jew brokers contending with each other who could best remedy with fraudulent circulation and depreciated paper the wretchedness and ruin brought on their country by their degenerate councils.
Edmund Burke. :(

Elliot Temple | Permalink | Comments (0)

Nietzsche the Anti-semite

The Siege: The Saga of Israel and Zionism by Conor Cruise O'Brien, pp 57-58:
... Nietzsche, through his work in replacing Christian (limited) anti-semitism with anti-Christian (unlimited) anti-semitism, played a large part in opening the way for the Nazis and the Holocaust.

I am well aware that that will seem to many people an extravagant, to some even outrageous, statement. The current[67] academic convention regarding Nietzsche is to treat Nazi admiration for this thinker as due to a misunderstanding. As far as anti-semitism is concerned, it can be shown that he condemned it, occasionally. Since the Second World War there has been a consensus for excluding him from the intellectual history of anti-semitism, in which, in fact, his role is decisive.

It is true that Nietzsche detested the vulgar (and Christian) anti-semitism of his own day, especially of his brother-in-law, Bernhard Foerster. It is also true that the main thrust of Nietzsche's writing was not directed against the Jews. It was directed against Christianity. But the way in which it was directed against Christianity made it far more dangerous to Jews than to Christians.

Anti-Christian anti-semitism in itself was nothing new. The most anti-Christian of the philosophes of the eighteenth century–Voltaire especially–were also anti-semitic, though not consistently so.[68] What was new in Nietzsche, however, was the ethical radicalism of his sustained onslaught on Christianity. The Enlightenment tradition, on the whole, had respected, and even to a great extent inculcated–through its advocacy of tolerance–the Christian ethic, the Sermon on the Mount.

Nietzsche's message was that the Christian ethic was poison; its emphasis on mercy reversed the true Aryan values of fierceness: "pride, severity, strength, hatred, revenge." And the people responsible for this transvaluation of values (Umwertung des Wertes), the root of all evil, were the Jews.

In The Antichrist he writes about the Gospels:
One is among Jews–the first consideration to keep from losing the thread completely–Paul and Christ were little superlative Jews. ... One would no more associate with the first Christians than one would with Polish Jews–they both do not smell good. ... Pontius Pilate is the only figure in the New Testament who commands respect. To take a Jewish affair seriously–he does not persuade himself to do that. One Jew more or less–what does it matter?
Nietzsche's real complaint against the vulgar Christian anti-semites of his day was that they were not anti-semitic enough; that they did not realize that they were themselves carriers of that semitic infection, Christianity.[69] "The Jews," he wrote in The Antichrist, "have made mankind so thoroughly false that even today the Christian can feel anti-Jewish without realizing that he is himself the ultimate Jewish consequence."
I think this is convincing. And important. Does anyone disagree or know more about it?

Elliot Temple | Permalink | Comments (0)

Epistemology In Short

I got asked for my philosophy on one foot. I personally never found Objectivism on one foot that useful. I thought it was too hard to understand if you don't already know what the stuff means. Philosophy is hard enough to communicate in whole books. Some people read Atlas Shrugged and think Rand is a communist or altruist. Some people read Popper and think he's a positivist or inductivist. Huge mistakes are easily possible even with long philosophical statements. I think the best solution involves back and forth communication so that miscommunication mistakes can be fixed along the way and understanding can be built up incrementally. But this requires the right attitudes and methods for talking to be very effective. And that's hard. And if people don't already have the right methods to learn and communicate well, how do you explain it to them? There's a chicken and egg problem that I don't have a great answer to. But anyway, philosophy, really short, I tried, here you go:

There is only one known rational theory of how knowledge is created: evolution. It answers Paley's problem. No one has ever come up with any other answer. Yet most people do not recognize evolution as a key theory in epistemology, and do not recognize that learning is an evolutionary process. They have no refutation of evolution, nor any alternative, and persist with false epistemologies. This includes Objectivism – Ayn Rand chose not to learn much about evolution.

Evolution is about how knowledge can be created from non-knowledge, and also how knowledge is improved. This works by a process of replication with variation and selection. In epistemology, ideas and variants are criticized and the survivors continue on in the process. This process incrementally makes progress, just like biological evolution. Step by step, flaws get eliminated and the knowledge gets better adapted and refined. This correction of errors is crucial to how knowledge is created and improved.

Another advantage of evolutionary processes is that they are resilient to mistakes. Many individual steps can be done badly and a good result still achieved. Biological evolution works even though many animals with advantageous genes die before other animals with inferior genes; there's a large random luck factor which does not ruin the process. This is important because of human fallibility: mistakes are common. We cannot avoid mistakes entirely and should instead emphasize using methods that can deal with mistakes well. (Methods which deal with mistakes well are rational; methods which do not are irrational because they entrench mistakes long term.)

A key issue in epistemology is how conflicts of ideas are handled. Trying to resolve these conflicts by authority or by looking at the source of ideas is irrational. It can make mistakes persist long term. A rational approach which can quickly catch and eliminate mistakes is to judge conflicting ideas by their content. How do you judge the content of an idea? You try to find something wrong with it. You should not focus on saying why ideas are good because if they have mistakes you won't find the mistakes that way. However, finding something good about an idea is useful for criticizing other ideas which lack that good feature – it reveals a flaw in those rivals. However, in cases where a good feature of an idea does not lead to any criticism of a rival, it provides no advantage over that rival. This critical approach to evaluating ideas follows the evolutionary method.

This has implications for morality and politics. How people handle conflicts and disagreements is a defining issue for their morality and politics. Conflicts of ideas should not be approached by authority and disagreement should not be disregarded. This implies a voluntary system with consent as a major issue. Consent implies agreement; lack of consent implies disagreement. Voluntary action implies agreement; involuntary action implies disagreement.

Political philosophy usually focuses too much on who should rule (or which laws should rule), instead of how to incrementally evolve our political knowledge. It tries to set up the right laws in the first place, instead of a system that is good at improving its laws. Mistakes should be expected. Disagreement should be expected. Everything should be set up to deal with this well. That implies making it easy to change rulers and laws (without violence). Also disagreement and diversity should be tolerated within the law.

Moral philosophy usually makes the same mistake as political philosophy. It focuses too much on deciding-declaring what is moral and immoral. There should be more concern with fallibility, and setting things up for moral knowledge to incrementally evolve. We aren't going to get all the answers right today. We should judge moral ideas more by how much they allow evolution, progress and mistake-correction, rather than by trying to know whether a particular idea would be ideal forever. Don't try to prophesy the future and do start setting things up so we can adjust well in the unknown future.

Things will go wrong in epistemology, morality and politics. The focus should be on incrementally evolving things to be better over time and setting things up to be resilient to mistakes. It's better to have mistaken ideas today and good mistake-correction setup than to have superior ideas today which are hard to evolve and fragile to error.

Elliot Temple | Permalink | Comments (0)

Rationally Resolving Conflicts of Ideas

I was planning to write an essay explaining the method of rationally resolving conflicts and always acting on a single idea with no outstanding criticisms. It would follow up on my essay Epistemology Without Weights and the Mistake Objectivism and Critical Rationalism Both Made where I mentioned the method but didn't explain it.

I knew I'd already written a number of explanations on the topic, so I decided to reread them for preparation. While reading them I decided that the topic is hard and it'd be very hard to write a single essay which is good enough for someone to understand it. Maybe if they already had a lot of relevant background knowledge, like knowing Popper, Deutsch or TCS, one essay could work OK. But for an Objectivist audience, or most audiences, I think it'd be really hard.

So I had a different idea I think will work better: gather together multiple essays. This lets people learn about the subject from a bunch of different angles. I think this way will be the most helpful to someone who is interested in understanding this philosophy.

Each link below was chosen selectively. I reread all of them as well as other things that I decided not to include. It may look like a lot, but I don't think you should expect an important new idea in epistemology to be really easy and short to learn. I've put the links in the order I recommend reading them, and included some explanations below.

Instead of one perfect essay – which is impossible – I present instead some variations on a theme.

Popper's critical preferences idea is incorrect. It's similar to standard epistemology, but better, but still shares some incorrectness with rival epistemologies. My criticisms of it can be made of any other standard epistemology (including Objectivism) with minor modifications. I explained a related criticism of Objectivism in my prior essay.

Critical Preferences
Critical Preferences and Strong Arguments

The next one helps clarify a relevant epistemology point:


Regress problems are a major issue in epistemology. Understanding the method of rationally resolving conflicts between ideas to get a single idea with no outstanding criticism helps deal with regresses.

Regress Problems

Confused about anything? Maybe these summary pieces will help:

Conflict, Criticism, Learning, Reason
All Problems are Soluble
We Can Always Act on Non-Criticized Ideas

This next piece clarifies an important point:

Criticism is Contextual

Coercion is an important idea to understand. It comes from Taking Children Seriously (TCS), the Popperian educational and parenting philosophy by David Deutsch. TCS's concept of "coercion" is somewhat different from the dictionary definition; keep in mind that it's our own terminology. TCS also has a concept of a "common preference" (CP). A CP is any way of resolving a problem between people which they all prefer. It is not a compromise; it's only a CP if everyone fully prefers it. The idea of a CP is that it's a preference which everyone shares in common, rather than disagreeing.

CPs are the only way to solve problems. And any non-coercive solution is a CP. CPs turn out to be equivalent to non-coercion. One of my innovations is to understand that these concepts can be extended. It's not just about conflicts between people. It's really about conflicts between ideas, including ideas within the same mind. Thus coercion and CPs are both major ideas in epistemology.

TCS's "most distinctive feature is the idea that it is both possible and desirable to bring up children entirely without doing things to them against their will, or making them do things against their will, and that they are entitled to the same rights, respect and control over their lives as adults." In other words, achieving common preferences, rather than coercion, is possible and desirable.

Don't understand what I'm talking about? Don't worry. Explanations follow:

Taking Children Seriously

The next essay explains the method of creating a single idea with no outstanding criticisms to solve problems and how that is always possible and avoids coercion.

Avoiding Coercion
Avoiding Coercion Clarification

This email clarifies some important points about two different types of problems (I call them "human" and "abstract"). It also provides some historical context by commenting on a 2001 David Deutsch email.

Human Problems and Abstract Problems

The next two help clarify a couple things:

Multiple Incompatible Unrefuted Conjectures
Handling Information Overload

Now that you know what coercion is, here's an early explanation of the topic:

Coercion and Critical Preferences

This is an earlier piece covering some of the same ideas in a different way:

Resolving Conflicts of Interest

These pieces have some general introductory overview about how I approach philosophy. They will help put things in context:

Philosophy: What For?

Want to understand more?

Read these essays and dialogs. Read Fallible Ideas. Join my discussion group and actually ask questions.

Elliot Temple | Permalink | Comments (0)

Regress Problems

Written April 2011 for the beginning of infinity email list:

Infinite regresses are nasty problems for epistemologies.

All justificationist epistemologies have an infinite regress.

That means they are false. They don't work. End of story.

There's options of course. Don't want a regress? No problem. Have an arbitrary foundation. Have an unjustified proposition. Have a circular argument. Or have something else even sillier.

The regress goes like this, and the details of the justification don't matter.

If you want to justify a theory, T0, you have to justify it with another theory, T1. Then T1 needs justifying by T2. Which needs justifying by T3. Forever. And if T25 turns out wrong, then T24 loses its justification. And with T24 unjustified, T23 loses its justification. And it cascades all the way back to the start.

I'll give one more example. Consider probabilistic justification. You assign T0 a probability, say 99.999%. Never mind how or why, the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we should wonder is if that 99.999% figure is correct. If it's not correct then it could be anything such as 90% or 1% or whatever. So it better be correct. So we better justify that it's a good estimate. How? Simple. We'll use our whim to assign the probability estimate itself a probability of 99.99999%. OK! Now we're getting somewhere. I put a lot of 9s so we're almost certain to be correct! Except, what if I had that figure wrong? If it's wrong it could be anything such as 2% or 0.0001%. Uh oh. I better justify my second probability estimate. How? Well, we're trying to defend this probabilistic justification method. Let's not give up yet and do something totally different; instead we'll give it another probability. How about 80%? OK! Next I ask: is that 80% figure correct? If it's not correct, the probability could be anything, such as 5%. So we better justify it. So it goes on and on forever.

Now there are two problems. First it goes on forever, and you can't ever stop; you've got an infinite regress. Second, suppose you stopped after some very large but finite number of steps. Then the probability the first theory is correct is arbitrarily small. Because remember that at each step we didn't even have a guarantee, only a high probability. And if you roll the dice a lot of times, even with very good odds, eventually you lose. And you only have to lose once for the whole thing to fail.
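The compounding here can be checked with a few lines of arithmetic. This is my own illustration, not part of the original argument; the 0.999 figure and the independence assumption are made up for the sake of the sketch:

```python
# Sketch: probability that an entire justification chain holds,
# assuming each step is independent and gets the same (high)
# probability. The 0.999 figure is an arbitrary illustration.

def chain_probability(step_probability, steps):
    """Chance that all `steps` links in the chain hold at once."""
    return step_probability ** steps

print(chain_probability(0.999, 10))     # still high
print(chain_probability(0.999, 1000))   # roughly a one-in-three chance
print(chain_probability(0.999, 10000))  # effectively zero
```

Even with 99.9% confidence at every step, a long enough chain makes the overall probability as small as you like, which is the second problem above.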

OK so regresses are a nasty problem. They totally ruin all justificationist epistemologies. That's basically every epistemology anyone cares about except skepticism and Popperian epistemology. And forget about skepticism, that's more of an anti-epistemology than an epistemology: skepticism consists of giving up on knowledge.

So how could Popperian epistemology deal with regresses?

(I've improved on Popper some.)

Regresses all go away if we drop justification. Don't justify anything, ever. Simple.

But justification had a purpose.

The purpose of justification is to sort out good ideas from bad ideas. How do we know which ideas are any good? Which should we believe are true? Which should we act on?

BTW that's the same general problem that induction was trying to address. And induction is false. So that's another reason we need a solution to this issue.

The method of addressing this issue has several steps, so try to follow along.

Step 1) You can suggest any ideas you want. There's no rules, just anything you have the slightest suspicion might be useful. The source of the ideas, and the method of coming up with them, doesn't matter to anything. This part is easy.

Step 2) You can criticize any idea you want. There's no rules again. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, *or* it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too.

Step 3) All criticized ideas are rejected. They're flawed. They're not good enough. Let's do better. This is easy too. Only the *exact* ideas criticized are rejected. Any idea with at least one difference is deemed a new idea. It's OK to suggest new ideas which are similar to old ideas (in fact it's a good idea: when you find something wrong with an idea you should try to work out a way to change it so it won't have that flaw anymore).

Step 4) If we have exactly one idea remaining to address some problem or question, and no one wants to revisit the previous steps at this time, then we're done for now (you can always change your mind and go back to the previous steps later if you want to). Use that idea. Why? Because it's the only one. It has no rivals, no known alternatives. It stands alone as the only non-refuted idea. We have sorted out the good ideas from the bad -- as best we know how -- and come to a definite answer, so use that answer. This step is easy too!

Step 5) What if we have a different number of ideas left over which is not exactly one? We'll divide that into two cases:

Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie with two or more ideas then they all fail.

Case 2) What if we have zero ideas? This is crucial because case one always turns into this! The answer comes in two main parts. The first part is: think of more ideas. I know, I know, that sounds hard. What if you get stuck? But the second part makes it easier. And you can use the second part over and over and it keeps making it easier every time. So you just use the second part until it's easy enough, then you think of more ideas when you can. And that's all there is to it.

OK so the second part is this: be less ambitious. You might worry: but what about advanced science with its cutting edge breakthroughs? Well, this part is optional. If you can wait for an answer, don't do it. If there's no hurry, then work on the other steps more. Make more guesses and think of more criticisms and thus learn more and improve your knowledge. It might not be easy, but hey, the problem we were looking at is how to sort out good ideas from bad ideas. If you want to solve hard problems then it's not easy. Sorry. But you've got a method, just keep at it.

But if you have a decision to make then you need an answer now so you can make your decision. So in that case, if you actually want to reach a state of having exactly one theory which you can use now, then the trick, when you get stuck, is to be less ambitious. How would that work in general terms? Basically if human knowledge isn't good enough to give you an answer of a certain quality right now, then your choices are either to work on it more and not have an answer now, or accept a lower quality answer. You can see why there isn't really any way around that. There's no magic way to always get a top quality answer now. If you want a cure for cancer, well I can't tell you how to come up with one in the next five minutes, sorry.

This is a bit vague so far. How does lowering your standards address the problem? So what you do is propose a new idea like this, "I need to do something, so I will do..." and then you put whatever you want (idea A, idea B, some combination, whatever else).

This new idea is not refuted by any of the existing criticisms. So now you have one idea, it isn't refuted, and you might be done. If you're happy with it, great. But you might not be. Maybe you see something wrong with it, or you have another proposal. That's fine; just go back to the first three steps and do them more. Then you'll get to step 4 or 5 again.

What if we get back here? What do we do the second time? The third time? We simply get less ambitious each time. The harder a time we're having, the less we should expect. And so we can start criticizing any ideas that aim too high (while under too much time pressure to aim that high).
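Putting the steps together, here's a rough sketch in code form. It's purely illustrative (the function and names are my own inventions, and real criticism is creative thinking, not a lookup), but it shows the shape of the procedure:

```python
def resolve(ideas, criticize, fallback):
    """Illustrative sketch of the method above, not a real algorithm.

    ideas:     candidate ideas (step 1: anything may be proposed).
    criticize: returns the list of known criticisms of an idea (step 2).
    fallback:  a less ambitious idea, e.g. "I need to do something
               now, so I will do X", which no existing criticism refutes.
    """
    # Step 3: every criticized idea is rejected.
    survivors = [idea for idea in ideas if not criticize(idea)]

    # Step 4: exactly one idea left -> use it.
    if len(survivors) == 1:
        return survivors[0]

    # Step 5, case 1: two or more survivors all share one flaw --
    # none refuted its rivals -- so all are rejected, leaving zero.
    # Step 5, case 2: zero ideas -> be less ambitious; the fallback
    # now stands alone as the single non-refuted idea.
    return fallback
```

For example, if "idea A" is criticized and "idea B" isn't, the only survivor is "idea B"; if everything is criticized, or there's a stalemate between survivors, the less ambitious fallback is what you act on.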

BTW it's explained on my website here, including an example:


Read that essay, keeping in mind what I've been saying, and hopefully everything will click. Just bear in mind that when it talks about cooperation between people, and disagreements between people, and coming up with solutions for people -- when it discusses ideas in two or more separate minds -- everything applies exactly the same if the two or more conflicting ideas are all in the same mind.

What if you get really stuck? Well why not do the first thing that pops into your head? You don't want to? Why not? Got a criticism of it? It's better than nothing, right? No? If it's not better than nothing, do nothing! You think it's silly or dumb? Well so what? If it's the best idea you have then it doesn't matter if it's dumb. You can't magically instantly become super smart. You have to use your best idea even if you'd like to have better ideas.

Now you may be wondering whether this approach is truth-seeking. It is, but it doesn't always find the truth immediately. If you want a resolution to a question immediately then its quality cannot exceed today's knowledge (plus whatever you can learn in the time allotted). It can't do better than the best that is known how to do. But as far as long term progress, the truth seeking came in those first three steps. You come up with ideas. You criticize those ideas. Thereby you eliminate flaws. Every time you find a mistake and point it out you are making progress towards the truth; you're learning. That's how we approach the truth: not by justifying but by identifying mistakes and learning better. This is evolution, it's the solution to Paley's problem, it's discussed in BoI and on my Fallible Ideas website. And it's not too hard to understand: improve stuff, keep at it, and you make progress. Mistake correcting -- criticism -- is a truth-seeking method. That's where the truth-seeking comes from.

Elliot Temple | Permalink | Comments (0)

Coercion and Critical Preferences

Written Aug 2008, addressing ideas of Karl Popper and David Miller about critical preferences:

You should eliminate rival theories until you have exactly one candidate theory and then act on that. We thus dodge any issues of comparing two still-standing theories using some sort of criterion.

And that's it. The problem is solved. When there is only one candidate theory the solution is dead easy. But this solution raises a new problem which is how to deal with all the rival theories (in short order).

If you act on a theory while there are any active rivals that is coercion, which is the cause of distress. You are, roughly, forsaking a part of yourself (without having dealt with it in a rational fashion first).

Often we don't know how to resolve a controversy between two theories promptly, perhaps not even in our lifetime. But that does not mean we are doomed to any coercion. We can adopt a *single theory* with *no active rivals* which says "I don't know whether A or B is better yet, but I do know that I need to choose what to do now, and thus I will do C for reason D." A and B talk about, say, chemistry, and don't contain the means to argue with this new theory proposed -- they don't address the issue now being considered of what to do given the unsettled dispute between A and B -- so are not relevant rival theories, so we end up with only one theory about what to do, this new one we just invented. And acting on this new theory clearly does not forsake A or B; it's not in conflict with them.

We might invent two new theories, one siding more with A, and one more with B, and thus have a new conflict to deal with. But then we have *new problem* which does not depend on resolving the dispute between A and B. And we can do as many layers of reinterpretations and meta-theorizing like this as we want. Coercion is avoidable and practical problems of action are soluble promptly.

If it really comes down to it, just formulate one new theory "I better just do C for reason D *and* all arguments to the contrary are nothing but attempts to sabotage my life because I only have 3 seconds left to act." That ought to put a damper on rival theories popping up -- it'd now have to include a reason it's not sabotage.

One could still think of a rival theory which says it wants to do E because of B, and this isn't sabotage b/c the A-based C will be disastrous and it's trying to help, or whatever. There is no mechanical strategy for making choices or avoiding coercion. What I mean to illustrate is we have plenty of powerful tools at our disposal. This process can go wrong, but there is plenty of hope and possibility for it to go right.


BTW this does not only apply to resolving rival theories *for purpose of action* or *within oneself*. It also works for abstract theoretical disputes between different people.

Suppose I believe X and you believe Y, *and we are rational*. Think of something like MWI vs copenhagen -- theories on that scale -- except something that we don't already know the answer to.

So we argue for a while, and it's clear that you can't answer some of my questions and criticisms, and I can't answer some of yours.

Most people would say "you haven't proven me wrong, and i see problems with your theory, so i am gonna stick with mine". That's called bias.

Some people might be tempted to analyze, objectively, which theory has more unanswered questions (weighted by importance of the question), and which theory is troubled by how many criticisms (weighted by amount of trouble and importance of criticism). And thus they'd try to figure out which theory is "better" (which doesn't imply it's true, or even that the other is false -- well of course strictly they are both false, but the truth might be a minor modification of the worse theory).

What I think they've done there is abandon the critical rationalist process and replace it with a misguided attempt to measure which theories are good.

What we should do is propose *new theories* like "the current state of the debate leaves open the question of X and Y, but we should be able to all agree on Z so for issue A we should do B, and for issue C we should do D, and that's something we can all agree on. We can further agree about what research ought to be done and is important to do to resolve questions about both X and Y." Thus we can make a new *single theory* that *everyone on both sides can agree to* which does not forsake X or Y. This is the *one rational view to take of the field*, unlike the traditional approach of people being in different and incommensurable camps. This view will leave them with *nothing to argue about* and *no disagreement*.

Of course, someone might say it's mistaken and propose another view of the same type. And so we could have an argument about that. But this new argument does not depend on your view in the X vs Y dispute. It's a new problem. Just like above. And if it gets stuck, we can make another meta-theory.

This approach I advocate follows the critical rationalist process through and through. It depends on constructing new theories (which are just guesses and may go wrong) and criticizing them. It never resorts to static criteria of betterness.

Elliot Temple | Permalink | Comments (0)

Criticism is Contextual

Criticism is contextual. The "same idea" as far as explicit content and the words it's expressed in, can have a different status (criticized, or not) depending on the context, because the idea might successfully solve one problem but not solve some other problem. So it's criticized in its capacity to solve that second problem, but not the first. This kind of thing is ubiquitous.

Example: you want to drive somewhere using a GPS navigator system. The routes your GPS navigator system gives you are often not the shortest, best routes. You have a criticism of the GPS navigator system.

Today, you want to drive somewhere. Should you use your GPS? Sure! Why not? Yes you had a criticism of it, but that criticism was contextual. You criticized it in the context of wanting shortest routes. But it's not criticized in the context of whether it will do a good job of getting you to today's destination (compared to the alternatives available). It could easily lead you to drive an extra hundred meters (maybe that's 0.1% more distance!) and still get you to your destination on time. Despite being flawed in some contexts, by some criteria, it could still get you there faster and easier than you would have done with manual navigation.
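The same point can be put in sketch form. This is only an illustration (the data and names are made up): a criticism is recorded together with the problem context it applies to, and an idea only counts as criticized relative to the context it's being used in.

```python
# Hypothetical illustration: criticisms keyed by the context
# (problem) they apply to.

criticisms = [
    {"idea": "use the GPS navigator",
     "criticism": "its routes are often not the shortest",
     "context": "find the shortest route"},
]

def is_criticized(idea, context):
    """An idea is criticized only relative to the problem context
    it's being used to solve."""
    return any(c["idea"] == idea and c["context"] == context
               for c in criticisms)

# Criticized for the shortest-route problem...
print(is_criticized("use the GPS navigator", "find the shortest route"))
# ...but not for the problem of reaching today's destination on time.
print(is_criticized("use the GPS navigator", "reach today's destination"))
```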

That there is a criticism of the GPS navigator system does not mean never to use it. Nor does it mean if you use it you're acting on refuted or criticized ideas. Criticism is contextual.

Elliot Temple | Permalink | Comments (0)

Rational People

I wrote this in Dec 2008.

Rational people are systems of ideas that can temporarily remove any one idea in the system without losing identity. We can remain functional without any one idea. This means we can update or replace it. And in fact we can often change a lot of ideas at once (how many depends in part on which).

To criticize one idea is not to criticize my rationality, or my ability to create knowledge, or my ability to make progress. It doesn't criticize what makes me human, nor anything permanent about me. So I have no reason to mind it. Either I will decide it is correct, and change (and if I don't understand how to change, then no one has reason to fault me for not changing yet), or decide it is incorrect and learn something from considering it.

The way ideas die in our place is that we change ourselves, while retaining our identity (i.e., we don't die), but the idea gets abandoned and does die.

Elliot Temple | Permalink | Comments (0)

Avoiding Coercion Clarification

In reply to my Avoiding Coercion essay, I got a question about the need for creativity and thinking of good questions as part of the process. In March 2012, for the beginning of infinity list, I clarified:

There is no simple method of creating questions. Questioning stuff is creative thinking, like criticizing, and so on.

The point of this method is that it's so trivially easy. Anything that relies on thinking of good questions is not reliable. We don't reliably have good ideas. We don't reliably notice criticisms of ideas we consider. We don't always reach a conclusion for some tricky issue for days, years, or ever.

If avoiding coercion required creative questions, imaginative critical thinking, and so on -- if a bunch of success at all that stuff was required -- then it wouldn't be available to everyone, all the time. It would fail sometimes. There's no way to always do a great job at those things, always get them right.

But one of the big important things is: we can always avoid coercion. It's not like always figuring out the answer to every math problem put in front of us. Every math problem we're given is soluble, and given infinite time/energy/attention/learning/etc we could figure it out. But the problems which threaten coercion aren't like that. They don't require infinite time/resources. We can deal with all of them, now, reliably, enough not to be coerced.

That is what the article is about: a simple method we can use reliably without making a big creative effort.

Now the method doesn't 100% ignore creativity. You can use creative thought as part of it, and you should. But even if you do a really bad job of that, you reach success anyway (where success means non-coercion). If you have an off day solving math problems, maybe you don't solve any today. But an off-day for creative thinking about your potentially coercive problems need not lead to any coercion.

The method doesn't require you to be thinking of good questions as you go along. If you do, great. If you don't, it works anyway. Which is necessary to reach the conclusion: all family coercion is avoidable, in practice, today, without being superman.

(I weakened the claim from dealing with all coercion because I don't want to comment on the situation where you get abducted by terrorists at gunpoint or other situations with force in them. That's hard. But all regular family situations are very manageable.)

Elliot Temple | Permalink | Comments (0)