What People Need Excuses For

Which issues people feel they need an excuse for, instead of just openly taking a position on, is revealing. It says a lot about what they consider fully legitimate and what they consider more questionable.

For example, no one needs any kind of excuse to be pro-freedom, pro-science, or pro-education. People take those positions proudly.

But people do need excuses to mistreat people they label "mentally ill". They don't just say, "I don't like him, so let's use force against him!" Psychiatry makes up a bunch of excuses about medical science to help legitimize their otherwise-illegitimate actions.

Global warming is also an excuse. The greens don't just proudly say that factories and electricity are bad. They say we're forced to cut back on industrial civilization or else the world will be destroyed. They feel they need a really strong, compelling excuse for opposing material wealth and technology.

The political left doesn't want to admit they are anti-liberal. They feel they need excuses for being anti-liberal. Their favorite excuse is to lie and say they are "liberal" and "progressive". And many people claim capitalism is pretty good (they find it hard to proudly and fully oppose capitalism), but they use excuses about "excesses" and "public goods" to legitimize a mixed economy.

People often feel the need to have an excuse for shutting down discussion and being closed-minded. They don't just say, "I am opposed to critical discussion with people who have different views than I do." Instead they make excuses about how they'd love to discuss but they're too busy, or the other person is ruining the discussion by being too unreasonable.


Elliot Temple | Permalink | Messages (0)

Aristotle (and Peikoff and Popper)

I just listened to Peikoff's lectures on Aristotle. I also reread Popper's WoP introduction about Aristotle. some thoughts:

http://www.peikoff.com/courses_and_lectures/the-history-of-philosophy-volume-1-–-founders-of-western-philosophy-thales-to-hume/

btw notice what's missing from the lecture descriptions: Parmenides and Xenophanes.

this is mostly Peikoff summary until i indicate otherwise later.

Aristotle is a mixed thinker. some great stuff and some bad stuff.

Part of the mix is because it's ancient philosophy. They didn't have modern science and some other advantages back then. It's early thinking. So Aristotle is kinda confused about God and his four causes. It was less clear back then what is magical thinking and what's rational-scientific thinking.

Aristotle is bad on moderation. He thought (not his original idea) that the truth is often found between two extremes.

Aristotle invented syllogism and formal logic. this is a great achievement. very worthwhile. it has a bad side to it which is causing problems today, but i don't blame Aristotle for that. it was a good contribution, a good idea, and it's not his fault that people still haven't fixed some of its flaws. actually it's really impressive he had some great ideas and the flaws are so subtle they are still fooling people today. i'll talk about the bad side later.

it's called formal logic because you can evaluate it based on the form. like:

All M are P.
S is an M.
Therefore, S is P.

this argument works even if you don't know what M, P and S are. (they stand for middle, predicate and subject.) (the classical example is M=man/men, P=mortal, S=Socrates.) Aristotle figured out the types of syllogism (there's 256. wikipedia says only 24 of them are valid though.)
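That form-only property can be illustrated with a small sketch (my own illustration, not from Aristotle or Peikoff), modeling "All A are B" as a subset relation between Python sets. The sets and names are arbitrary; any M, P and S work the same way:

```python
# Modeling the classic "Barbara" syllogism with sets, to show it's valid
# purely by form, whatever M, P and S happen to stand for.
def all_are(a, b):
    """'All A are B' holds when every member of A is a member of B."""
    return a.issubset(b)

men = {"Socrates", "Plato"}                  # M
mortals = {"Socrates", "Plato", "Fido"}      # P

# Premise 1: All M are P.
assert all_are(men, mortals)
# Premise 2: S is an M.
assert "Socrates" in men
# Conclusion, guaranteed by the form alone: S is P.
assert "Socrates" in mortals
```

Swapping in any other sets that satisfy the two premises leaves the conclusion intact, which is the point of calling the logic "formal".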

Aristotle was apparently good on some biology and other science stuff but i don't really know anything about that.

Aristotle started out as a student of Plato but ended up rejecting many of Plato's ideas.

Aristotle didn't say a ton about politics. What he said is mixed. Better than Plato.

Aristotle – like the Greeks in general (as opposed to e.g. pre-modern Christians) – cared about human happiness and life on Earth. and he thought morality was related to human happiness, success, effectiveness, etc. (as opposed to duty moralities from e.g. early Christians and Kant which say morality means doing your duty and this is separate from what makes you happy or makes your life good.)

Aristotle advocated looking at the world, empirical science. he invented induction.

Aristotle was confused about infinity. (Peikoff and some other Objectivists today like Harry Binswanger roughly agree with Aristotle's infinity mistakes.)

Aristotle was generally pro-human and pro-reason. in a later lecture Peikoff says the dark ages were fixed because European Christendom got some copies of Aristotle's writing from the Muslims and Jews (who were trying to reconcile him with their religions) and then Thomas Aquinas attempted to reconcile Aristotle with Christianity and this made it allowable for Christians to read and think about Aristotle which is what got progress going again.


now Popper's perspective, which Peikoff basically agrees with most of the facts about, but evaluates differently.

Popper agrees Aristotle did some great stuff and got a few things wrong. like Peikoff and a ton of other people. But there's a major thing Popper doesn't like. (BTW William Godwin mentioned disliking Aristotle and Plato but didn't say why.)

Aristotle wanted to say I HAVE KNOWLEDGE. this is good as a rejection of skepticism, but bad as a rejection of fallibility. Aristotle and his followers, including Peikoff, equivocate on this distinction.

Part of the purpose of formal logic is an attempt to achieve CERTAINTY – aka infallibility. that's bad and is a problem today.

Objectivism says it uses the word "certain" to refer to fallible knowledge (which they call non-omniscient knowledge. Objectivism says omniscience is impossible and isn't the proper standard for something to qualify as knowledge). and Ayn Rand personally may have been OK about this (despite the bad terminology decision). but more or less all other (non-Popperian) Objectivists equivocate about it.

this confusion traces back to Aristotle who knew induction was invalid and deduction couldn't cover most of his claims. (Hume was unoriginal in saying induction doesn't work, not only because of Aristotle but also various others. i don't know why Hume gets so much credit about this from Popper and others. Popper wrote that Aristotle not only invented induction but knew it didn't work.)

and it's not just induction that has these problems and equivocations, it's attempts at proof in general ("prove" is another word, like "certain", which Objectivists use to equivocate about fallibility/infallibility). how do you justify your proof? you use an argument. but how do you justify that argument? another argument. but then you have an infinite regress.

Aristotle knew about this infinite regress problem and invented a bad solution which is still in popular use today including by Objectivism. his solution is self-evident, unquestionable foundations.

Aristotle also has a reaffirmation by denial argument, which Peikoff loves, which has a similar purpose. which, like the self-evident foundations, is sophistry with logical holes in it.

Popper says Aristotle was the first dogmatist in epistemology. (Plato was dogmatic about politics but not epistemology). And Aristotle rejected the prior tradition of differentiating episteme (divine, perfect knowledge) and doxa (opinion which is similar to the truth).

the episteme/doxa categorization was kinda confused. but it had some merit in it. you can interpret it something like this: we don't know the INFALLIBLE PERFECT TRUTH, like the Gods would know, episteme. but we do have fallible human conjectural knowledge which is similar to the truth (doxa).

Aristotle got rid of the two categories, said he had episteme, and equivocated about whether he was a fallibilist or not.

here are two important aspects of the equivocation and confusion.

  1. Aristotle claimed his formal logic could PROVE stuff. (that is itself problematic.) but he knew induction wasn't on the same level of certainty as deduction. so he came up with some hedges, excuses and equivocations to pretend induction worked and could reach his scientific conclusions. Popper thinks there was an element of dishonesty here where Aristotle knew better but was strongly motivated to reach certain conclusions so came up with some bullshit to defend what he wanted to claim. (Popper further thinks Aristotle falsely attributed induction to Socrates because he had a guilty conscience about it and didn't really want the burden of inventing something that doesn't actually work. and also because if Socrates -- the ultimate doubter and questioner -- could accept inductive knowledge then it must be really good and meet a high quality standard!)

  2. I talk about equivocating about fallible vs. infallible because I conceive of it as one or the other, with two options, rather than a continuum. But Peikoff and others usually look at it a different way. instead of asking "fallible or infallible?" they ask something like "what quality of knowledge is it? how good is it? how justified? how proven? how certain?" they see a continuum and treat the issue as a matter of degree. this is perfect for equivocating! it's not INFALLIBLE, it's just 90% infallible. then when i talk about fallible knowledge, they think i'm talking about a point on the continuum and hear like 0% infallible (or maybe 20%) and think it's utter crap and i have low standards. so they accuse me and Popper of being skeptics.

the concept of a continuum for knowledge quality – something like a real number line on which ideas are scored with amount of proof, amount of supporting evidence/arguments, amount of justification, etc, and perhaps subtracting points for criticism – is a very bad idea. and looking at it that way, rather than asking "fallible or not?" and "there is a known refutation of this or there isn't?" and other boolean questions, is really bad and damaging.

Peikoff refers to the continuum with his position that ideas can be arbitrary (no evidence for it. reject it!), plausible (some evidence, worth some consideration), probable (a fair amount of evidence, pretty good idea), or certain (tons of evidence, reasonable people should accept it, there's no real choice or discretion left). he uses these 4 terms to refer to points on the continuum. and he is clear that it's a continuum, not just a set of 4 options.

But there is nothing more beyond fallible knowledge, short of infallible knowledge. And the ongoing quest for something fundamentally better than unjustified fallible knowledge has been a massive dead end. All we can do is evolve our ideas with criticism – which is in fact good enough for science, economics and every other aspect of life on Earth.



Epistemology

I wrote:

The thing to do [about AI] is figure out what programming constructs are necessary to implement guesses and criticism.

Zyn Evam replied (his comments are green):

Cool. Any leads? Can you tell more? That's what I have problems with. I cannot think of anything else than evolution to implement guesses and criticism.

the right answer would have to involve evolution, b/c evolution is how knowledge is created. i wonder why you were looking for something else.

one of the hard problems is:

suppose you:

  1. represent ideas in code, in a general way
  2. represent criticism in code (this is actually implied by (1) since criticisms are ideas)
  3. have code which correctly detects which ideas contradict each other and which don't
  4. have code to brainstorm new ideas and variants of existing ideas

that's all hard. but you still have the following problem:

two ideas contradict. which one is wrong? (or both could be wrong.)

this is a problem which could use better philosophy writing about it, btw. i'd expect that philosophy work to happen before AI gets anywhere. it's related to what's sometimes called the duhem-quine problem, which Popper wrote about too.

one of my own ideas about epistemology is to look at symmetries. two ideas contradicting is symmetric.

what do you mean by symmetries? how is two ideas contradicting symmetric? could you give an example?

"X contradicts Y" means that "Y contradicts X". When two ideas contradict, you know at least one of them is mistaken, but not which one. (Actually it's harder than that because you could be mistaken that they contradict.)

Criticism fundamentally involves contradiction. Sometimes a criticism is right, and sometimes the idea being criticized is right, and how do you decide which from the mere fact that they contradict each other?

With no additional information beyond "X and Y contradict", you have no way to take sides. And labelling Y a criticism of X doesn't mean you should side with it. X and Y have symmetric (equal) status. In order to decide whether to judge X or Y positively you need some kind of method of breaking the symmetry, some way to differentiate them and take sides.
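The symmetry can be sketched in code (my own toy model, not from the original discussion; the claim/value representation is a hypothetical simplification):

```python
# Toy model: an idea is a (claim, value) pair. Two ideas contradict when
# they assign different values to the same claim. The relation is
# symmetric, so by itself it gives no way to decide which idea to reject.
def contradicts(x, y):
    return x[0] == y[0] and x[1] != y[1]

X = ("sky_color", "blue")
Y = ("sky_color", "green")

assert contradicts(X, Y)
assert contradicts(X, Y) == contradicts(Y, X)  # symmetric: no side to take
```

Breaking the symmetry requires information beyond the contradiction itself, such as criticism of each idea's explanation.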

Arguments are often symmetric too. E.g., "X is right because I said so" can be used equally well to argue for Y. And "X is imperfect" can be used equally well to argue against Y.

How to break this kind of symmetry is a major epistemology problem which is normally discussed in other terms like: When evidence contradicts a hypothesis, it's possible to claim the evidence is mistaken rather than the hypothesis. (And sometimes it is!) How do you decide?

So when two ideas contradict we know one of them at least is mistaken, but not which one. When we have evidence that seems to contradict a hypothesis we can never be sure that it indeed contradicts it. From the mere fact of contradiction, without additional information, we cannot decide which one is false. We need additional information.

Hypotheses are built on other hypotheses. We need to break the symmetry by looking at the hypotheses on which the contradicting ideas depend. And the question is: how would you do that? Is that right?

Mostly right. You can also look at the attributes of the contradicting ideas themselves, gather new observational data, or consider whatever else may be relevant.

And there are two separate questions:

  1. How do you evaluate criticisms at all?

  2. How do you evaluate criticisms formally, in code, for AIs?

I believe I know a lot about (1), and have something like a usable answer. I believe I know only a little about (2) and have nothing like a usable answer to it. I believe further progress on (1) -- refining, organizing, and clarifying the answer -- will help with solving (2).

Below I discuss some pieces of the answer to (1), which is quite complex in full. And there's even more complexity when you consider it as just one piece fitting into an evolutionary epistemology. I also discuss typical wrong answers to (1). Part of the difficulty is that what most people believe they know about (1) is false, and this gets in the way of understanding a better answer.

My answer is in the Popperian tradition. Some bits and pieces of Popper's thinking have fairly widespread influence. But his main ideas are largely misunderstood and consequently rejected.

Part of Popper's answer to (1) is to form critical preferences -- decide which ideas better survive criticism (especially evidentiary criticism from challenging test experiments).

But I reject scoring ideas in general epistemology. That's a pre-Popper holdover which Popper didn't change.

Note: Ideas can be scored when you have an explanation of why a particular scoring system will help you solve a particular problem. E.g. CPU benchmark scores. Scoring works when limited to a context or domain, and when the scores themselves are treated more like a piece of evidence to consider in your explanations and arguments, rather than a final conclusion. This kind of scoring is actually comparable to measuring the length of an object -- you define a measure and you decide how to evaluate the resulting length score. This is different than an epistemology score, universal idea goodness score, or truth score.

I further reject -- with Popper -- attempts to give ideas a probability-of-truth score or similar.

Scores -- like observations -- can be referenced in arguments, but can't directly make our decisions for us. We always must come up with an explanation of how to solve our problem(s) and expose it to criticism and act accordingly. Scores are not explanations.

This all makes the AI project harder than it appears to e.g. Bayesians. Scores would be easier to translate to code than explanations. E.g. you can store a score as a floating point number, but how do you store an explanation in a computer? And you can trivially compare two scores with a numerical comparison, but how do you have a computer compare two explanations?

Well, you don't directly compare explanations. You criticize explanations and give them a boolean score of refuted or non-refuted. You accept and act on a single non-refuted explanation for a particular problem or context. You must (contextually) refute all the other explanations, rather than have one explanation win a comparison against the others.

This procedure doesn't need scores or Popper's somewhat vague and score-like critical preferences.

This view highlights the importance of correctly judging whether an idea refutes another idea or not. That's less crucial in scoring systems where criticisms add or subtract points. If you evaluate one issue incorrectly and give an idea -5 points instead of +5 points, it could still end up winning by 100 points so your mistake didn't really matter. That's actually bad -- it essentially means that issue had no bearing on your conclusion. This allows for glossing over or ignoring criticisms.

A correct criticism says why an idea fails to solve the problem(s) of interest. Why it does not work in context. So a correct criticism entirely refutes an idea! And if a criticism doesn't do that, then it's harmless. Translating this to points, a criticism should either subtract all the points or none, and thus using a scoring system correctly you end up back at the all-or-nothing boolean evaluation I advocate.
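The all-or-nothing procedure might be sketched like this (my own toy model; the example ideas and the stand-in criticism are hypothetical illustrations, not real arguments):

```python
# Boolean idea evaluation: a criticism either refutes an idea in the
# current context or it isn't a criticism of it at all. No point scores.
def evaluate(ideas, criticisms):
    """Return the ideas not refuted by any criticism. Each criticism is a
    predicate returning True when it refutes an idea in this context."""
    return [idea for idea in ideas
            if not any(refutes(idea) for refutes in criticisms)]

ideas = ["grass cures colds", "rest helps recovery"]
# Stand-in for a real argument ("eating grass has no explanatory link to
# curing colds"), reduced here to a trivial predicate:
criticisms = [lambda idea: "grass" in idea]

surviving = evaluate(ideas, criticisms)
assert surviving == ["rest helps recovery"]
```

Note there's no ranking of the survivors against each other; an idea is either contextually refuted or it isn't.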

This effectively-boolean issue comes up with supporting evidence as well. Suppose some number of points is awarded for fitting with each piece of evidence. The points can even vary based on some judgement of how important each piece of evidence is. The importance judgement can be arbitrary, it doesn't even matter to my point. And consider evidence fitting with or supporting a theory to refer to non-contradiction since the only known alternatives basically consist of biased human intuition (aka using unstated, ambiguous ideas without figuring out what they are very clearly).

So you have a million pieces of evidence, each worth some points. You may, with me, wish to score an idea at 0 points if it contradicts a single piece of evidence. That implies only two scores are possible: 0 or the sum total of the point value of every piece of evidence.
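The two-score collapse can be sketched directly (my own illustration; the point values are arbitrary):

```python
# If a single contradiction zeroes an idea's score, only two totals are
# possible: 0, or the sum of every evidence item's points. The "score" is
# a boolean in disguise.
evidence_points = [3, 7, 15, 25]  # arbitrary importance weights

def score(contradicts_any_evidence):
    return 0 if contradicts_any_evidence else sum(evidence_points)

assert {score(True), score(False)} == {0, 50}  # only two outcomes
```

However the weights are chosen, the result partitions ideas into exactly two categories, which is the boolean evaluation again.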

But let's look at two ways people try to avoid that.

First, they simply don't add (or subtract) points for contradiction. The result is simple: some ideas get the maximum score, and the rest get a lower score. Only the maximum score ideas are of interest, and the rest can be lumped together as the bad (refuted) category. Since they won't be used at all anyway, it doesn't matter which of them outscore the others.

Second, they score ideas using different sets of evidence. Then two ideas can score maximum points, but one is scored using a larger set of evidence and gets a higher score. This is a really fucked up approach! Why should one rival theory be excluded from being considered against some of the evidence? (The answer is because people selectively evaluate each idea against a small set of evidence deemed relevant. How are the selections made? Biased intuition.)

There's an important fact here which Popper knew and many people today don't grasp. There are infinitely many theories which fit (don't contradict) any finite set of evidence. And these infinitely many theories include ones which offer up every possible conclusion. So there are always max-scoring theories, of some sort, for every position. Which makes this kind of scoring end up equivalent to the boolean evaluations I advocated in the first place. Max-score or not-max-score is boolean.

Most of these infinitely many theories are stupid which is why people try to ignore them. E.g. some of the form, "The following set of evidence is all correct, and also let's conclude X." X here is a completely unargued non sequitur conclusion. But this format of theory trivially allows a max-score theory for every conclusion.

The real solution to this problem is that, as Deutsch clearly explained in FoR (with the grass cure for the cold example), most bad ideas are rejected without experimental testing. Most ideas are refuted on grounds like:

  1. bad explanation

I was going to make a longer list, but everything else on my list can be considered a type of bad explanation. The categorizations aren't fundamental anyway, it's just organizing ideas for human convenience. A non sequitur is a type of bad explanation (non explanation). And a self-contradictory idea is a type of bad explanation too. And having a bad explanation (including none) of how it solves the problem it's supposed to solve is another important case. That gets into something else important which is understood by Popper and partly by Rand, but isn't well known:

Ideas are contextual. And the context is, specifically, that they address problems. Whether a criticism refutes an idea has to be evaluated in a particular context. The same idea (as stated in English) can solve one problem and fail to solve another problem. One way to approach this is to bundle ideas with their context and consider that whole thing the idea.

Getting back to the previous point, it's only when ideas survive our initial criticism (including not blatantly contradicting evidence we know offhand) that we take more interest in them and start carefully comparing them against the evidence and doing experimental tests. Testing helps settle a small number of important cases, but isn't a primary method. (Popper only partly understood this, and Deutsch got it right.)

The whole quest -- to judge ideas by how well (degree, score) they fit evidence -- is a mistake. That's a dead end and distraction. Scores are a bad idea, and evidence isn't the place to focus. The really important thing is evaluating criticism in general, most of which broadly relates to: what makes explanations bad?

BTW, what is an explanation? Loosely it's the kind of statement which answers why or how. The word "because" is the most common signal of explanations in English.

Solving problems requires some understanding of 1) how to solve the problem and 2) why that solution will work (so you can judge if the solution is correct). So explanation is required at a basic level.

So, backing up, how do you address all those stupid evidence-fitting rival ideas? You criticize them (by the category, not individually) for being bad explanations. In order to fit the evidence and have a dumb conclusion, they have to have a dumb part you can criticize (unless the rival idea actually isn't so dumb as you thought, a case you have to be vigilant for). It's just not an evidence-based criticism (and nor should the criticism be done with unstated, biased commonsense intuitions combined with frustration at the perversity of the person bringing an arbitrary, dumb idea into the discussion). And how do you address the non-evidence-fitting rival ideas? By rejecting them for contradicting the evidence (with no scoring).

Broadly it's important to take seriously that every flaw with an idea (such as contradicting evidence, having a self-contradiction, having a non sequitur, or having no explanation of how or why it solves the problem it claims to solve) either 1) ruins it for the problem context or 2) doesn't ruin it. So every criticism is either decisive or (contextually) a non-criticism. So evaluations of ideas have to be boolean.

There is no such thing as weak criticism. Either the criticism implies the idea doesn't solve the problem (strong criticism), or it doesn't (no criticism). Anything else is, at best, more like margin notes which may be something like useful clues to think about further and may lead to a criticism in the future.

The original question of interest was how to take sides between two contradicting ideas, such as an idea and a criticism of it. The answer requires a lot of context (only part of which I've covered above), but then it's short: reject the bad explanations! (Another important issue I haven't discussed is creating variants of current ideas. A typical reaction to a criticism is to quickly and cheaply make a new idea which is a little different in such a way that the criticism no longer applies to it. If you can do this without ruining the original idea, great. But sometimes attempts to do this run into problems like all the variants with the desired-traits-to-address-the-criticism ruin the explanation in the original idea.)



Valuing Criticism

I wrote to the Fallible Ideas discussion group:

this reminds me of a question: did you find many mistakes in Mises and others when reading them?

Zyn Evam replied (all green quotes):

not many :/

better discuss more!!!!!

and study critical thinking (a branch of philosophy) more!!

how should I do that? what is the best way?

write short posts to FI. daily.

there is no study guide or prepackaged life plan for this. no "learn this then this then this then this and then you're awesome" and you just follow the instructions. you have to lead yourself.

i can give examples of the kinds of things that are good to do. but don't just do my specific examples as if they were a curriculum.

you could talk about what you know and don't know already, and what problems you see with that and what you think is good about it, and ask what problems others see. you could talk about what if anything you think you might need to learn and why.

you could talk about what you think your problems in your life are and your current plans for improving and ask for any better ideas.

One problem I have is not writing much to FI, or writing sporadically, and then stopping. I believe I understand the value of others supplying criticism for my ideas, but I haven't integrated it much with my life. That is not enough. It is not like I feel urged to share my ideas so that others can find faults in them. I think that should be a thing to aim for. I should be excited about others pointing out I am wrong.

It helps to conceptualize "find faults" as "find opportunities for improvement".


It helps to value something highly. Some people value truth, but other values work too. Some people really want to win at video game or sports competitions and they form a good attitude to criticism to help them achieve that goal.

Valuing something highly handles layers of indirection better. If you care a ton about D, that helps you care about A to help with B to help with C to help with D.

For example, consider a typical person who cares a little about their car and has no interest in paint. Then he won't want to learn about car detailing paints and brushes. He'll only do that if he feels pressured to by e.g. a highly visible scratch. But people who care a ton about their car often form some interest in car-related paint so they can improve their car. And the person who cares even more may learn about mixing custom paint or even manufacturing a new type of paint with different chemistry.

It helps to be interested in stuff in multiple ways. The guy who learns about paint chemistry and manufacturing generally either

1) already had some separate interest in science and business

or

2) he tried looking into them for his car. but once he got started on them, he found he liked them for some reasons independent of his car. so even if he stopped driving and sold his car, he still might continue with them.


It helps to slow down and pay more attention to your life. Try to be consciously aware of what you're doing and intentionally choose it according to some reasoning, rather than get "sucked in" to activities. If you can do that in general with your current activities, it will put you in a better position to make reasoned changes.

Don't try to change everything at once. If you can be more self-aware of your current activities, without changing them, that's a good step. Then you'll be in a better position to evaluate them and decide what, if anything, you actually want to change.


It helps to think about philosophical problems and connections on a regular basis, during your life. Like considering how philosophical issues are relevant to what you're doing and how philosophical answers could help with it. You can do this intentionally if it doesn't come naturally to you. E.g. you can take regular 5 minute breaks to do it.

E.g. you could notice you're trying to do something difficult and you want it to work instead of not work, but you have a significant concern that it may not work. Then philosophy about errors is especially relevant. Is there a way to proceed so that error is impossible? Knowing the answer to that matters. If the answer is yes, it'd be good to find out the method. If the answer is no, then is there anything to be done about error besides fatalistically put up with it? etc

Or you could decide you need to learn a new skill for your project. Then philosophy about learning is relevant. Are there better and worse ways to learn? What causes some attempts to learn things to fail? How does one learn faster or better? That's all useful.

Are these kinds of things too abstract for you? You can concretize. The book Understanding Objectivism helps. You can find relevant sections if you search it for terms like "concretize", "concretizing" and "chewing".

Peikoff talked somewhere about his experiences learning from Ayn Rand. She'd tell him an idea, and then he'd go out in the world and notice it in a bunch of places and see it for himself and connect it to a bunch of concretes.


I thought my problem was finding the time. But it has more to do with preferences.

You have something like a prioritized list of stuff you want to do. You look at it and think the stuff above philosophy will take up all your time. You think if you could finish the short term stuff near the top of the list, then you'd have enough time for philosophy.

But you'll always add new things to the list. There's plenty of stuff you could do. So what matters most is how highly prioritized philosophy is. Raising philosophy's prioritization will make a bigger difference than freeing up some time by clearing some things off the list.

Specifically, if you prioritize philosophy above most incoming things being added to your list, then you'll do it often. As a loose approximation, you can think of the incoming new stuff to do as being on a bell curve with a mean of 100 priority and a standard deviation of 15. Then if you prioritize philosophy at 90, it doesn't have much chance. But if you prioritize philosophy at 130, then around 98% of the new additions to your list will be inserted below philosophy.
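The loose approximation can be checked numerically (my own sketch; the distribution parameters are the post's stipulated toy numbers, not real data):

```python
from statistics import NormalDist

# Incoming tasks' priorities modeled as Normal(mean=100, sd=15),
# per the loose approximation above.
incoming = NormalDist(mu=100, sigma=15)

# Fraction of new tasks that get inserted below philosophy on the list:
below_130 = incoming.cdf(130)  # philosophy prioritized at 130
below_90 = incoming.cdf(90)    # philosophy prioritized at 90

assert below_130 > 0.97   # about 97.7%, roughly the 98% figure
assert below_90 < 0.30    # about 25%: philosophy rarely gets reached
```

So the jump from priority 90 to 130 changes philosophy from something almost every new task preempts into something almost nothing preempts.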



Peikoff Getting Parmenides Wrong

Understanding Objectivism by Leonard Peikoff:

What is the name for the type of person in philosophy who clings to concepts and says, in effect, “Facts may contradict my concepts, but if so, it’s tough on the facts”? A “rationalist.” Rationalism has dominated philosophy (at least the better philosophy) through the ages. Starting way back with Parmenides, who gave an argument as to why change is impossible, and then saw things change in front of his eyes, and said, “They’re not really changing, because that simply does not agree with my unanswerable argument.”

I generally agree with Peikoff's point. I'm going to criticize only the comment on Parmenides.

For context, Peikoff is no casual commenter on the history of philosophy. He's a teacher who's lectured on it. And he considers the lectures good enough to sell:

http://www.peikoff.com/courses_and_lectures/the-history-of-philosophy-volume-1-–-founders-of-western-philosophy-thales-to-hume/

A reader would reasonably expect Peikoff to know what he's talking about regarding Parmenides, and not to have made this statement carelessly. I think Peikoff's own position would be, "I'm familiar with Parmenides and I'm right" rather than, "You're being too picky and can't expect me to know much about Parmenides".

Now, from The World of Parmenides by Karl Popper:

Parmenides was a philosopher of nature (in the sense of Newton’s philosophia naturalis). A whole series of highly important astronomical discoveries is credited to him: that the Morning Star and the Evening Star are one and the same; that the Earth has the shape of a sphere (rather than of a drum of a column, as Anaximander thought). About equally important is his discovery that the phases of the Moon are due to the changing way in which the illuminated half-sphere of the Moon is seen from the Earth.[5]

So Parmenides was a scientist who made empirical discoveries. He wasn't a rationalist who refused to look at the world.

And Parmenides' main work was about the conflict he found between appearance and reality. As a scientist, Parmenides discovered some ways that appearance and reality don't match. E.g. the Earth looks flat but is spherical. This conflict stood out to Parmenides and interested him, so he wrestled with it, tried to make sense of it, and wrote about it.

Popper proposes that Parmenides first made an empirical discovery: the moon doesn't really change (its phases are only the appearance of its unchanging, half-illuminated sphere seen from different angles). Then, in trying to grapple with that discovery, he generalized it into the idea that actually nothing changes. [1]

Discovering the difference between appearance and reality is a big deal. It's a hard problem. Early work on the matter wasn't up to modern standards and people got confused, but that doesn't make Parmenides anything resembling a modern rationalist.

So what Peikoff said about Parmenides is completely wrong.


[1] This idea is a lot better than it sounds, by the way. It has some similarities to e.g. modern spacetime theory. Time is tricky and commonsense ideas about time are wrong (see David Deutsch's books for details).

And there was a successor theory to Parmenides': that reality consists of atoms and the void (rather than Parmenides' single unchanging whole with no empty space), and the atoms don't change but do move. So Parmenides' idea was fruitful: it helped frame an important problem and led to some better ideas.

Parmenides' idea may also have been related to Xenophanes' religious ideas, which have some value of their own: rejecting anthropomorphic gods, monotheism, and differentiating perfect/divine truth from fallible human knowledge (similarly to Parmenides). Parmenides was a pupil of Xenophanes.


Elliot Temple | Permalink | Messages (0)

Churchill and Roosevelt Betrayed Hundreds of Thousands to Death

The Gulag Archipelago by Aleksandr Solzhenitsyn has a lot of stories about how evil the Soviet gulag system was. Below I quote a brief section about Western complicity in Soviet crimes, and about Churchill and Roosevelt in particular.

Context for reading the quote: The "Vlasov army" refers to Russian units in the German army in WWII. They turned against the Germans and saved Prague before the Soviet army arrived. (Soviet histories lie and take credit.) The "act of a loyal ally" refers to the Vlasov army betraying the Germans. And the Soviets routinely punished anyone who'd been taken prisoner by the Germans with 15 years in the gulag (which is preceded by torture). Many other people were executed. One reason is they didn't want people who knew about life in Europe to spread information about it in Russia. The Soviets were so unfairly cruel and murderous it's hard to believe if you haven't read about it. Many people would rather have killed themselves than be executed or tortured and imprisoned by the Soviets. Keep that context in mind when considering turning anyone over to the Soviets.

After saving Prague (bold added):

... the Vlasov army began to retreat toward Bavaria and the Americans. They were pinning all their hopes on the possibility of being useful to the Allies; in this way their years of dangling in the German noose would finally become meaningful. But the Americans greeted them with a wall of armor and forced them to surrender to Soviet hands, as stipulated by the Yalta Conference. In Austria that May, Churchill perpetrated the same sort of "act of a loyal ally," but, out of our accustomed modesty, we did not publicize it. He turned over to the Soviet command the Cossack corps of 90,000 men.

[This surrender was an act of double-dealing consistent with the spirit of traditional English diplomacy. The heart of the matter was that the Cossacks were determined to fight to the death, or to cross the ocean, all the way to Paraguay or Indochina if they had to ... anything rather than surrender alive. Therefore, the English proposed, first, that the Cossacks give up their arms on the pretext of replacing them with standardized weapons. Then the officers—without the enlisted men—were summoned to a supposed conference on the future of the army in the city of Judenburg in the English occupation zone. But the English had secretly turned the city over to the Soviet armies the night before. Forty busloads of officers, all the way from commanders of companies on up to General Krasnov himself, crossed a high viaduct and drove straight down into a semicircle of Black Marias, next to which stood convoy guards with lists in their hands. The road back was blocked by Soviet tanks. The officers didn't even have anything with which to shoot themselves or to stab themselves to death, since their weapons had been taken away. They jumped from the viaduct onto the paving stones below. Immediately afterward, and just as treacherously, the English turned over the rank-and-file soldiers by the trainload—pretending that they were on their way to receive new weapons from their commanders.

In their own countries Roosevelt and Churchill are honored as embodiments of statesmanlike wisdom. To us, in our Russian prison conversations, their consistent shortsightedness and stupidity stood out as astonishingly obvious. How could they, in their decline from 1941 to 1945, fail to secure any guarantees whatever of the independence of Eastern Europe? How could they give away broad regions of Saxony and Thuringia in exchange for the preposterous toy of a four-zone Berlin, their own future Achilles' heel? And what was the military or political sense in their surrendering to destruction at Stalin's hands hundreds of thousands of armed Soviet citizens determined not to surrender? They say it was the price they paid for Stalin's agreeing to enter the war against Japan. With the atom bomb already in their hands, they paid Stalin for not refusing to occupy Manchuria, for strengthening Mao Tse-tung in China, and for giving Kim Il Sung control of half Korea! What bankruptcy of political thought! And when, subsequently, the Russians pushed out Mikolajczyk, when Benes and Masaryk came to their ends, when Berlin was blockaded, and Budapest flamed and fell silent, and Korea went up in smoke, and Britain's Conservatives fled from Suez, could one really believe that those among them with the most accurate memories did not at least recall that episode of the Cossacks?]

Along with them, he also handed over many wagonloads of old people, women, and children who did not want to return to their native Cossack rivers. This great hero, monuments to whom will in time cover all England, ordered that they, too, be surrendered to their deaths.


Elliot Temple | Permalink | Messages (2)

Stop Saying Lies and Other People's Ideas

Maps of Meaning: The Architecture of Belief by Jordan Peterson

I started to hear a “voice” inside my head, commenting on my opinions. Every time I said something, it said something – something critical. The voice employed a standard refrain, delivered in a somewhat bored and matter-of-fact tone:

You don’t believe that.

That isn’t true.

You don’t believe that.

That isn’t true.

The “voice” applied such comments to almost every phrase I spoke.

I couldn’t understand what to make of this. I knew the source of the commentary was part of me – I wasn’t schizophrenic – but this knowledge only increased my confusion. Which part, precisely, was me – the talking part, or the criticizing part? If it was the talking part, then what was the criticizing part? If it was the criticizing part – well, then: how could virtually everything I said be untrue? In my ignorance and confusion, I decided to experiment. I tried only to say things that my internal reviewer would pass unchallenged. This meant that I really had to listen to what I was saying, that I spoke much less often, and that I would frequently stop, midway through a sentence, feel embarrassed, and reformulate my thoughts. I soon noticed that I felt much less agitated and more confident when I only said things that the “voice” did not object to. This came as a definite relief. My experiment had been a success; I was the criticizing part. Nonetheless, it took me a long time to reconcile myself to the idea that almost all my thoughts weren’t real, weren’t true – or, at least, weren’t mine.

All the things I “believed” were things I thought sounded good, admirable, respectable, courageous. They weren’t my things, however – I had stolen them. Most of them I had taken from books. Having “understood” them, abstractly, I presumed I had a right to them – presumed that I could adopt them, as if they were mine: presumed that they were me. My head was stuffed full of the ideas of others; stuffed full of arguments I could not logically refute. I did not know then that an irrefutable argument is not necessarily true, nor that the right to identify with certain ideas had to be earned.

wise, IMO.

ppl overreach by saying a bunch of crap instead of actually doing stuff right and thinking. (and if u recommend they slow down, they often bring up the issue that zero would be a bad amount to talk, too. and then you see them say something really careless they spent 2 minutes on. why can't they consistently spend, say, 5 minutes reviewing each of their posts -- more for really long ones, but don't do those anyway -- and send if everything looks good? that should easily get them a more medium result between rushed and nothing.)

a common, important tip for learning is: better to do something correctly, slowly, then speed up. don't go faster than you know what you're doing and try to fix the mistakes later. this applies to learning to touch type, learning video games, and also writing an FI reply.

Peterson also said in a video somewhere, something like: most of what people say is lies or other people's ideas. they don't have their own ideas or a self. they need to create that. i wonder if he's read The Fountainhead.*

in another video, Peterson said basically that people have been building up lies on top of lies on top of lies, for decades. that's why they have such difficult problems! that's why their lives are such a mess! it's layer and layers and layers of lies to untangle!


Elliot Temple | Permalink | Messages (3)