
The Harry Binswanger Letter Posts

Today I rejoined The Harry Binswanger Letter (HBL). It's an Objectivist discussion forum. I left in the past because it was moderated and didn't allow me to say some things. It also limited how often you could post to something like 4 posts per week, if I remember right. I also had some disagreements with Binswanger (e.g. he hated Popper from a position of ignorance). It's since been changed to allow unlimited posting on a web forum, and he now only emails selected posts out to the members.

You have to pay for HBL. I don't mind that, but I do mind the lack of public links. (If I ever make a paid forum, maybe I'll have people pay for posting but allow reading for free.)

What I'm going to try doing is reposting my own posts as comments below (unfortunately I'm going to lose the formatting sometimes). I want to have my own copies in case I unsubscribe, they lose their data in a computer crash, or they edit or delete some of my posts. (They actually pay someone to go through and edit formatting and typos. I don't know how far the editing goes.)

Update: I've been banned from HBL. Read my blog post explaining. In short, Binswanger dislikes critical discussion.


Elliot Temple on October 17, 2016

Messages (267)

One-line summary: You should hypotheticalize your personal issues.

You can gain information about your personal issues if you figure out the essentials, then write a hypothetical scenario of general interest involving the important issues. All personal problems have some important impersonal issues involved in them. You will learn both from the process of figuring out which impersonal questions, relevant to your personal problems, you don’t know the answer to, and from the responses people write.

Sometimes you may not get exactly the sort of reply you were hoping for. Analyze what happened and try again. You can clarify your question, say how the responder missed the point, or recognize that what you stated in your question is not actually what you wanted to know. When you ask the wrong question – which may happen a lot at first – that isn’t a waste of anyone’s time. The person answering your impersonal question thought the issue was interesting without regard for how it applies to your life, so it was fine for him. And for you, if you’re confused about what your problems are, working on that is a great use of time.


curi at 5:38 PM on October 17, 2016 | #6887 | reply | quote

One-line summary: Make judgments to solve problems, update as useful. Don’t have arbitrary doubts but do watch out for new information or new situations.

You don’t need a single answer to your question. Does a judgment require “regular updating”? Depends! Do you have a reason to update the judgment regularly? Some reasons to consider frequent judgment updates: the person is young and changing quickly. The person seems to have a lot of potential to you, but also some bad traits, so you regularly update your judgment because it could easily change and also because you see potential value here that’s worth your attention. You interact frequently with the person, e.g. your boss, and it’s valuable to you to know if he changes.

Judgments should not be 100% final. People change, like you say. Also, your judgment is fallible. And miscommunications and misunderstandings are common. This does not mean you should arbitrarily double-check your judgment. Reconsider when you have a reason to. Besides the reasons discussed above, you can reconsider judgments when you get new information that contradicts something you thought you knew, and when you learn a new principle that would let you reevaluate it. But still don’t bother updating your judgment unless you have something to gain from a better judgment in this case. Focus your attention on areas of your life where you see potential problems, such as when you have reason to doubt a judgment you made and getting it wrong would be a problem for you.

The overall purpose of making a judgment is to solve some problem you have, e.g., the problem of whether to do business with someone, date someone, etc. Keep the problem in mind as context and update judgments when you think it’ll be useful for addressing your problem better.

Also, people are complex and it’s easy to underestimate how different from you other people are. They have different lives, different background knowledge, different ideas, different ways of speaking, different goals, etc, etc. Misjudging people is easy. So put a lot of effort into judgment when being wrong would harm you substantially. But generally just do your best to judge people, and everything else, using an appropriate amount of effort, and live by your judgment. Never entertain arbitrary doubts, reasonless doubts, purposeless doubts, etc.

You wonder if a judgment needs to be based on “fundamental characteristics.” Sort of. You can judge that someone is a bad chess player by seeing several of his chess games without knowing about his fundamental characteristics. But if you want to make a broad judgment, then you can’t do it using concretes like that. For a judgment that will cover many issues, including ones where you have no experience with how the person handles that particular issue, you need to get a handle on some of the person’s principles, methods of thinking, philosophical positions, etc. One way to quickly get some insight is by offering a criticism. Good people appreciate criticism because they want to improve their ideas, whereas anti-criticism people are severely limiting their learning. And people commonly either appreciate or dislike criticism in general, rather than varying from issue to issue.


curi at 5:38 PM on October 17, 2016 | #6888 | reply | quote

One-line summary: Statism doesn’t logically follow from altruism.

> E.g., if altruism were true, statism would be right politically.

Do you think that’s actually correct? I’m not sure if it’s a serious example. I disagree with it.

Altruism is a system of ideas containing many contradictions. From a contradiction, logically, anything follows. It’s nonsense. So statism doesn’t follow from altruism any more or less than classical liberalism does.

One could undertake the project of creating a non-contradictory version of altruism. But I’m not convinced that’s possible. I think altruism is too misconceived.

Hypothetically, if one could develop non-contradictory altruism, I think the results would be so evil it’d basically be pure death (via self-destruction) and not allow for any political system at all. It’d be thorough and immediate destruction which doesn’t allow any state to exist, so it wouldn’t be compatible with statism.


curi at 5:38 PM on October 17, 2016 | #6889 | reply | quote

One-line summary: Logicians have bad goals, which may be behind some of the problems HB sees.

I think logic is good, but the way many people think about logic should be reconsidered. And some of the problems come from bad goals and misunderstanding the purpose of logic.

The proper purpose of logic is something like: to help people think better and understand reality. Logic is a tool for human use in the pursuit of our values.

Most logicians do not understand this or focus their work on helping with it.

Here are three common things logicians do instead which I think are bad. These may shed some light on where logicians have gone wrong. I offer them as potential underlying issues behind some of the detailed problems HB discusses.

Logicians try to impress people. They seek prestige and intellectual reputation so people will think they are smart and listen to their conclusions. They use being a logician as a social status to get attention and credibility. What they should do instead is use logic to figure out better ideas, and then let those better ideas impress people directly.

Logicians are infallibilists. Like many mathematicians too, they are looking for a point at which they can stop thinking. They want answers which are safe, so they no longer have to worry about being mistaken or using further judgement. They want out of context answers so they don’t have to keep thinking as contexts change, rather than wanting knowledge that is helpful in the changing contexts of human lives. They also want to be able to say they have “proven” something and use that as a way to ignore critical rebuttals.

Logicians seek abstract knowledge in a bad way. They are working to discover something like platonic forms – unchanging truths that exist separate from the real world. Rather than focus on methods of thinking with value to people’s lives, they intentionally separate their work from life and think that’s somehow a good thing. They want to do work that’s pure and untainted by human contexts. Their approach to logic isn’t focused on living on earth.


curi at 6:17 PM on October 17, 2016 | #6891 | reply | quote

One-line summary: Thinking work will replace grunt work, and I comment on AI.

Manual labor will decline, but there are unlimited opportunities for people to do knowledge work such as programming. Boring stuff gets automated but there’s no shortage of problems in people’s lives which better ideas could help solve.

As to AI, it’s either rather limited (e.g. chess playing “AI”) or else you’ve created a genuine person that’s capable of learning and thinking just like a human (“AGI”). There is no in between due to the jump to universality explained in The Beginning of Infinity by David Deutsch. AI either is far too limited to compete with people in general, or is general purpose.

What defines a “person” is the general purpose ability to think and create new knowledge. A person can understand anything that can be understood. (If you object to the terminology “person”, there are alternatives such as “general intelligence” or “universal knowledge creator”.)

Creating an artificial person is like having a kid. In general, more people is a good thing, though it can be problematic in a welfare state with a regulated economy. What if someone wants to create tens of thousands of AIs because it’s much easier to copy an AI than have a kid? It’s up to them (the parent, if you will) to provide resources to the people they bring into existence and help them become independent and self-supporting.

As an aside, the progress of the AI field has been greatly exaggerated. While many special cases (e.g. chess engines and Siri) are doing pretty well, there’s essentially no progress on artificial general intelligence and no promising leads. The reason is that AI researchers don’t understand epistemology, so they have no idea what they are doing or what could possibly work. (Some of them study the wrong epistemologies, e.g. Bayesian epistemology.)


curi at 6:41 PM on October 17, 2016 | #6893 | reply | quote

One-line summary: The pneumonia example has a good point. I propose some refinement.

> For instance, a medical researcher finds a case of pneumonia is cured by penicillin. The conclusion is not: “At least one case of pneumonia is cured by penicillin.” The conclusion is: “Either pneumonia as such, or a certain type of pneumonia as that type, is cured by penicillin.”

I think you’re on to something, but I see a problem here. Addressing it may help clarify matters.

Researchers don’t simply find that penicillin cured a case of pneumonia. Being pedantic, they find they gave someone penicillin and at a later date he didn’t have pneumonia. Many illnesses go away by themselves, or for some other reason besides the drug you’re trying out. And one has to worry about the placebo effect and double blind trials. Blinding is problematic because of side effects (e.g. penicillin can cause vomiting, headache, or swelling) which can reveal who got the drug, but active controls are problematic too.

The point is, to find that penicillin cured even one case of pneumonia requires a complex scientific process. It takes a bunch of reasoning. It takes creative, fallible human thought. It takes considering different explanations for what’s going on and considering criticism.

In that context, here’s the takeaway that I hope you’ll find interesting: the right conclusion to draw from finding penicillin cured a case of pneumonia depends on how you found that out. The details of the whole complex process of scientific thinking affect what conclusions you should draw. In general, from that reasoning process, you already have some understanding which can help guide how broad to make your conclusions and what to focus them on.

For example, let’s consider two different hypothetical penicillin research scenarios. (For simplification just assume all pneumonia is due to bacteria, and let’s not get distracted if I mess up some medical details.)

One researcher knows about bacteria and the cause of pneumonia and is already familiar with other antibiotics. He knows a great deal about chemistry too. He tries penicillin on pneumonia because something about its molecular structure suggests to him that it will function as an antibiotic. He tries it on some bacteria in a petri dish and it works. Then he tries it on a guy with pneumonia and the guy recovers much faster than he’d expect to happen naturally (he measured the state of the pneumonia with some advanced medical instruments, and understands the course of the illness). He concludes that penicillin may be an antibiotic and should be researched further.

A second researcher lives in a more primitive time and doesn’t know about bacteria, cells or molecules. He has no way to discover that penicillin works in a single case (because one single recovery would be totally inconclusive), but he tries lots of stuff (according to some hypotheses he has about what will help cure people) and eventually notices penicillin seems to help a bunch of people with pneumonia. He concludes that penicillin seems to help with pneumonia or a type of pneumonia and also that penicillin should be tried out for potentially curing other illnesses.


curi at 7:39 PM on October 17, 2016 | #6901 | reply | quote

#6888

> And people commonly either appreciate or dislike criticism in general, rather than varying from issue to issue.

I thought the opposite was true. For example, artists often know the value of criticism for their work, but that doesn't mean they would accept criticism in their parenting or be open minded politically.


Anonymous at 11:34 PM on October 17, 2016 | #6903 | reply | quote

#6893

> There is no in between due to the jump to universality explained in The Beginning of Infinity by David Deutsch.

Can you explain this?


Anonymous at 11:36 PM on October 17, 2016 | #6904 | reply | quote

#6889

> Hypothetically, if one could develop non-contradictory altruism, I think the results would be so evil it’d basically be pure death (via self-destruction) and not allow for any political system at all.

Why do you say non-contradictory altruism would be pure death? What are you imagining it to be like? Altruism suggests that the moral is to benefit others. A person's self-destruction cannot benefit others. Altruism doesn't forbid a person from benefiting themselves. It just doesn't judge self-benefit as moral.


Anonymous at 11:52 PM on October 17, 2016 | #6905 | reply | quote

> I thought the opposite was true. For example, artists often know the value of criticism for their work, but that doesn't mean they would accept criticism in their parenting or be open minded politically.

That's not the opposite. I think you're basically suggesting artists dislike criticism about more or less everything, with one exception (art).

Further, I think most artists do not really like criticism of their work.

> Can you explain this?

Read BoI, like it says. Ask a question when you have something to say.

> Why do you say non-contradictory altruism would be pure death?

Altruism means sacrificing yourself and is not a code one can live under.


Anonymous at 12:09 AM on October 18, 2016 | #6906 | reply | quote

#6906

>> Can you explain this?

>

> Read BoI, like it says. Ask a question when you have something to say.

Why did you explain other things in your own words but don't want to explain this one?

>> Why do you say non-contradictory altruism would be pure death?

>

> Altruism means sacrificing yourself and is not a code one can live under.

Altruism means benefiting others even if you sacrifice yourself. It doesn't imply that you should always sacrifice yourself. It doesn't forbid self-benefit from happening accidentally; it simply says it's of no moral value.

I do not think anyone who defends altruism would argue that you should never benefit yourself. In most contexts, to be of benefit to others you have to exist and you have to benefit yourself first before you can help others. For instance, you can't give to charity if you don't learn how to make money first.


Anonymous at 1:05 AM on October 18, 2016 | #6908 | reply | quote

It might be useful to provide the links for the forum threads where you posted for people who joined the forum and want to follow the discussion.


Anonymous at 1:31 AM on October 18, 2016 | #6909 | reply | quote

>I do not think anyone who defends altruism would argue that you should never benefit yourself. In most contexts, to be of benefit to others you have to exist and you have to benefit yourself first before you can help others.

right, cuz altruism is contradictory. which is what you seem to be talking about.

but the original context was non-contradictory altruism:

>Hypothetically, if one could develop non-contradictory altruism, I think the results would be so evil it’d basically be pure death (via self-destruction) and not allow for any political system at all. It’d be thorough and immediate destruction which doesn’t allow any state to exist, so it wouldn’t be compatible with statism.


Kate at 9:29 AM on October 18, 2016 | #6912 | reply | quote

> Why did you explain other things in your own words but don't want to explain this one?

big, complicated, hard to talk about to ppl without background knowledge.

> I do not think anyone who defends altruism would argue that you should never benefit yourself.

all real life defenders of altruism believe contradictions, not a PURE version.

> It might be useful to provide the links for the forum threads where you posted for people who joined the forum and want to follow the discussion.

FYI the posts will be easy to find if you join now, they're all near the top of the list.


Anonymous at 10:22 AM on October 18, 2016 | #6916 | reply | quote

ARI should have a discussion forum

Say 100 people read some Rand. Around 10-30 of them will get the wrong idea about each basic point. Confusions are very common. To concretize: 25 people will come away thinking Rand is a libertarian. 10 will think she's an anarchist. 20 will think she's an ally of Hayek. 25 will come away thinking Howard Roark was *intended* to be an unrealistic fictional character. 20 won't know why Dagny came back to New York from Galt's Gulch. 20 won't know why John Galt quit and retreated from the world. 30 will think Rand shouldn't have used the word "selfish" positively and will, by the end of reading The Virtue of Selfishness, forget there was an explanation of that point at the beginning.

Any given person will be confused about some points. If they just read a few books, and stop there, they will come away with major misconceptions. Even an exceptional person may only get 8 out of 10 of the basic points right at first. People will also have a few rare or even unique confusions of their own.

What can sort out all this confusion? Discussion.

There are many pretty straightforward issues where 25% of readers are confused and 75% have roughly the right idea. People will often misunderstand any one correction, but if they read several corrections, often they'll understand one.

This doesn't work so well for controversial or advanced issues. It's still a good start.

Discussion has major advantages over rereading and individual pondering (which are both very important too). People's strengths, weaknesses and blind spots vary. When you try to understand something a second and third time, it's easy to repeat the same mistakes you made the first time. But some other people will find it easy not to make those particular mistakes that you keep making.

I think ARI should have a *free* online discussion place where people interested in Objectivism can ask questions and get answers, especially *young people discussing with peers*. Online is convenient and will attract a lot more participants than in-person meet up groups. I've looked around a lot, and I don't think there's any good place for this currently (nothing even close).


curi at 12:17 PM on October 18, 2016 | #6917 | reply | quote

Sharing Objectivism with children is difficult because they lack freedom

It's hard to reach people under 17 or so because their parents and teachers control a lot of their lives, they have screen time limits and other restrictions on their freedom to pursue ideas, and disagreeing with their parents and teachers poses practical problems for them. This creates a small time window that's somewhat more promising, right around college age, where people have some control over their life but haven't yet decided what kind of life to have and put huge effort into setting it up.

Disagreeing with the people who rule your life is problematic. Parents and teachers judge how much you buy into their ideas and it's hard (and unpleasant) to fake, and dissent is punished. It's hard to get serious about Objectivist values if you aren't integrating them into your whole life. But children in our society are much less free than adults, and trying to live as an Objectivist will lead children to major conflicts with authority. Overall, freedom is a crucial ingredient in learning, especially learning something unpopular.

These issues are more severe with younger children and gradually improve, and they vary by person. Some 10 year olds have more freedom than some 18 year olds.

Unfortunately, it's also hard to reach people once they're over age 6 or so, and basically no one younger than that has much freedom. As Rand wrote in *The Comprachicos*:

> “Give me a child for the first seven years,” says a famous maxim attributed to the Jesuits, “and you may do what you like with him afterwards.” This is true of most children, with rare, heroically independent exceptions. The first five or six years of a child’s life are crucial to his cognitive development.

Sadly I don't have some wonderful solution to offer, but I think talking about the specifics of the problems we face is worthwhile.


curi at 12:50 PM on October 18, 2016 | #6918 | reply | quote

I agree people dislike rational ideas themselves and have a question about goals

> the main thing holding back the widespread acceptance of Objectivist ideas is that people don’t like them, don’t want them, and aren’t interested (beyond the age of about 22) in questioning their philosophy.

I agree. In my own 15 years of trying to spread rational philosophy ideas, I've found it's mostly the ideas themselves that people don't like. This contradicts a lot of the feedback people give, which is that they object to my style. I'm told that if only I'd be nicer and less critical and judgmental, then people would listen more. "You catch more flies with honey than vinegar." Actually what they want is for me to write less clearly. What they dislike is **clarity** about ideas which point out large problems they have. They get more friendly when the ideas are vague and easy to misunderstand or evade. And the moment you start *successfully communicating* an important idea that disagrees with them, that's when they don't like it (and then one common strategy is they reply with a bunch of bluster about how you're bad at communication and should be more "respectful" or whatever).

> Combatting Dewey’s Progressive Education and replacing it with a rational philosophy of education, and a rational curriculum, is an essential method for countering the mind-destruction of the young

Regarding educational philosophy, I think Taking Children Seriously is good and I disagree with Montessori.

I also have a question: **is the goal to get people to learn a little bit of Objectivism, or a lot?** It's arguable that these would be pursued in the same way (a little understanding may turn into a lot), but I think there's differences worth considering. An analogy is if you want to get a hundred thousand dollars or a hundred million dollars, you don't pursue those the same way. There are a lot of ways to save up $100,000 as an employee that won't get you to $100,000,000. With the bigger goal you should probably start your own business.


curi at 12:56 PM on October 18, 2016 | #6919 | reply | quote

It's tricky to name what's going on, both because people don't understand the concepts involved and because of irrational social rules governing discussion.

Harry Binswanger wrote:

> Another way to handle statements intended as attacks is to name what’s going on. Particularly if there are third parties listening. To “Ayn Rand hated the poor,” you might say: “You’re reaching for some club to smash these ideas. […] [emphasis added]

I’ve used the name-what’s-going-on approach frequently and I like some things about it. But I’ve run into trouble too.

The trouble is I see what’s going on before other people do. I name it clearly, I’m right, but they don’t get it. Sometimes I delay naming it to get more evidence and clarity about what’s going on first, but there are problems with this too, like participating in a failing conversation and letting it continue to fall apart. And it’s hard to know what it will take for other people to get what’s happening.

I think the quoted example would sometimes have this problem, depending on the people. Many people don’t really know what a club to smash ideas is. (There are also a variety of ways they could misunderstand the further explanation I didn’t quote.) Nor are they accustomed to the intellectual standards that “Ayn Rand hated the poor” doesn’t live up to. They might not clearly see any real difference between that statement and “Didn’t Ayn Rand morally condemn the poor?” because they think imprecisely and look at vague gists of statements. So you name what’s going on and some confused people, especially observers, can feel that you’re massively escalating hostilities and destroying the discussion.

There’s a great deal of ignorance in the world and naming issues can confuse people who often would require years of study to learn the background knowledge involved in really understanding the named issue well.

Part of what’s going on is people consider many types of rational criticism hostile, but are accepting of socially calibrated hostilities and irrationality. There are social rules for what you can and can’t say, what’s polite, what’s being a hostile jerk, etc, and these social rules don’t match the rational rules very well. So audiences often take very nasty comments as acceptable because they are socially acceptable, and take completely fair rebuttals as nasty because the rational reply violates a social rule.

One approach to this is trying to learn the irrational social rules and follow them in your discussions. There are various ways that sucks. I, personally, don’t want to do it and I put some effort into not learning those rules. We all pick up and understand some of the rules and expectations of the culture we live in. But I don’t want to be a person who finds various irrationality natural, intuitive and expected. So I end up offending people frequently and I’m skeptical about people who rarely do.

Another part of the trouble is you’re supposed to gloss over a lot of problems in conversations. It’s socially expected that you ignore a lot of what you don’t like that the other guy said and pay selective attention to a few bits you can agree about. Naming discussion problems that occur goes strongly against this and looks to most people like you’re picking a fight on purpose. Regular people say nasty things to each other all the time, and gloss it over. Nastiness is just part of conversations in their world. Ignoring and glossing over misunderstandings is another major part of mainstream conversations; people often fail to actually communicate much at all and just kinda pretend they had a meaningful discussion.


curi at 1:45 PM on October 18, 2016 | #6921 | reply | quote

Reply to #6916

>> Why did you explain other things in your own words but don't want to explain this one?

> big, complicated, hard to talk about to ppl without background knowledge.

What background knowledge is needed to understand the theory of jump to universality?

>> I do not think anyone who defends altruism would argue that you should never benefit yourself.

> all real life defenders of altruism believe contradictions, not a PURE version.

I was arguing it's not a contradiction of altruism to benefit yourself too.


Anonymous at 2:26 AM on October 19, 2016 | #6939 | reply | quote

Reply to #6918

> But children in our society are much less free than adults, and trying to live as an Objectivist will lead children to major conflicts with authority. Overall, freedom is a crucial ingredient in learning, especially learning something unpopular.

How can a child live as an Objectivist while being dependent on their parents? Isn't a child always a Socialist in practice because they live from hand-outs?

This is also why I don't understand the claim that TCS is compatible with the Capitalist tradition.


Anonymous at 2:26 AM on October 19, 2016 | #6940 | reply | quote

I’m concerned about rigging too (though I disagree about the Electoral College being bad). Check out this Project Veritas video: Rigging the Election – Video I: Clinton Campaign and DNC Incite Violence at Trump Rallies

I’m not very concerned about Trump’s tariff statements. Yes they are awful, but there are bigger things at stake right now. We’re at war. Obama has funded Iran, the leading state sponsor of terrorism, which is building nuclear weapons and ICBMs capable of hitting the United States. Iran is our enemy, they want to destroy us, they openly say they want to destroy us, and they are taking actions to damage us. The Iran deal alone is, in my view, enough reason to vote against 4 more years of continuing Obama’s foreign policy (which also created ISIS). I think if Hillary is elected, a lot more Americans will die than if Trump is elected.

---

this was in reply to a guy talking about rigging and also saying he wouldn't vote for Trump due to the tariffs issue. the links were embedded on HBL. i'll just copy them here:

https://www.youtube.com/watch?v=5IuJGHuIkzY

http://curi.us/1804-donald-trump-is-a-protectionist

http://curi.us/1763-terrible-iran-deal


curi at 2:08 PM on October 19, 2016 | #6950 | reply | quote

> How can a child live as an Objectivist while being dependent on their parents? Isn't a child always a Socialist in practice because they live from hand-outs?

>

> This is also why I don't understand the claim that TCS is compatible with the Capitalist tradition.

you are speaking imprecisely. socialism refers to stuff like state control of the means of production.

if the parents were Objectivist, then i think a child could live as an Objectivist, no problem. i was trying to say it's a big problem to be an Objectivist as a child when your parents and teachers strongly disagree and disapprove.

children don't accept parental hand-outs due to laziness, unwillingness to produce, or any Objectivism-incompatible character flaws like that.

children, in a proper life, are productive. they learn things. they accomplish. their parents help them with this. the fact they aren't selling goods/services doesn't mean they aren't living a moral, productive life.

you can think of children as being in the early stages of starting a business or a career. they aren't making a profit yet, but they are working up to it. it's totally reasonable and legitimate that this takes longer, starting from birth, than when an adult starts a business b/c the baby has less background knowledge, preexisting skills, etc

parents know their child will need some resources to get started in life, and make the completely reasonable decision to provide those resources when they decide to have a kid. they get some values out of it that they want. no one loses.


curi at 2:18 PM on October 19, 2016 | #6951 | reply | quote

> What background knowledge is needed to understand the theory of jump to universality?

e.g. understanding universality. lots of epistemology stuff is relevant too.


Anonymous at 2:35 PM on October 19, 2016 | #6957 | reply | quote

question i asked HB

I hadn't heard that "value" is a stolen concept of altruism before. That sounds interesting and promising to me. I'm familiar with the stolen concept idea, but not this application. Could you explain how to think about it?


curi at 2:56 PM on October 19, 2016 | #6961 | reply | quote

reply to guy saying professional philosophers are smarter

By “professional philosopher” I take you to mean a person with both a philosophy oriented career and also a philosophy degree.

I disagree and have had different experiences than you. I’ve found ideas I value primarily come from people without philosophy degrees. This applies both to people I talk with personally, and to books I read.

Ayn Rand didn’t have a philosophy degree, nor did Mises, nor Socrates.

The best discussions I’ve had myself have been with David Deutsch (physicist whose two books say more about philosophy than physics, but his degree and awards are in physics), Thomas Szasz (The Myth of Mental Illness author), and some lay people you’ve definitely never heard of like Justin Mallone (lawyer), Alan Forrester (physicist), some homeschooling parents, and also myself (dialogs). One theme is most of these people have put a large amount of work, effort, reading, studying, etc into thinking well–it just wasn’t done in school philosophy courses. *I find people who are able and willing to study philosophy on their own time, on their own initiative, to more often be good thinkers.*

I don’t think the schools are teaching good stuff in philosophy courses. One piece of evidence for this is Peikoff’s experience (you can skip to 3:55) talking about how his PhD thesis was meaningless garbage and he just wrote filler and then talked to Rand periodically to wipe it from his mind. And he reports Rand’s advice about doing his philosophy PhD:

> You’re in a concentration camp. Write whatever you have to to get past the guards. And then once you escape, wipe it out like torture.

I don’t think that kind of experience helps one reason better! I think an undergraduate philosophy degree is less destructive but still usually bad.

---

copy/paste lost the links for dd, szasz, JM, AF, me, but here's the important one:

http://www.peikoff.com/2009/10/19/episode-084-10192009/


curi at 10:30 AM on October 20, 2016 | #6964 | reply | quote

HB's altruism and value answer

HB answered, in summary: "Value" rests on the concept of "life" (the valuing agent's life) and altruism advocates sacrificing higher values to lower values and nonvalues which is advocating the destruction of value.

I think the basic point is altruism is anti-life and values come from the pro-life worldview.


curi at 11:07 AM on October 20, 2016 | #6965 | reply | quote

> children, in a proper life, are productive. they learn things. they accomplish. their parents help them with this. the fact they aren't selling goods/services doesn't mean they aren't living a moral, productive life.

An adult living on welfare could be learning and accomplishing things, yet you wouldn't call their life moral and productive.

Why is it different for children?

And what kind of productive things do you have in mind that children do?

> you can think of children as being in the early stages of starting a business or a career. they aren't making a profit yet, but they are working up to it. it's totally reasonable and legitimate that this takes longer, starting from birth, than when an adult starts a business b/c the baby has less background knowledge, preexisting skills, etc

How do you know if children are starting a business or career, versus just learning to be moochers?


Anonymous at 3:21 PM on October 20, 2016 | #6968 | reply | quote

> An adult living on welfare could be learning and accomplishing things, yet you wouldn't call their life moral and productive.

>

> Why is it different for children?

You're trying to compare someone who has *done something wrong* to a child who has *done nothing wrong*.


Anonymous at 3:31 PM on October 20, 2016 | #6969 | reply | quote

> I look forward to a Human Action written by an Objectivist

It already exists. Capitalism: A Treatise on Economics by George Reisman. (Free PDF.) Reisman got his PhD under Mises and was a student of Rand. He and his wife are former ARI board members. The book is really good. It reads kind of like Human Action, but also integrates Objectivist arguments (such as material arguing against environmentalism).

---

links

http://www.capitalism.net/index.html

http://www.capitalism.net/Capitalism/CAPITALISM_Internet.pdf


curi at 3:46 PM on October 20, 2016 | #6973 | reply | quote

Fix the world with philosophical education, not a protest vote.

> The government today is the direct result of your choosing the lesser of two evils, for generations.

I agree that electing the lesser of two evils never solves the problem. But I still advocate voting for the lesser of two evils. Why?

Elections are not how you solve major societal problems. The problems are due to ideas, and will be solved by ideas. The solution can only be philosophical education, which is more effective outside of electoral politics. After making progress with philosophical education, then we’ll get some better candidates in future elections. Sadly there’s no quick or easy solution here.

The purpose of your vote this time, at this point in the process, is to choose between Trump or Hillary (or you can not participate in deciding which of them is better). That’s it. There’s no opportunity to save the country by your vote. There’s no opportunity to vote for capitalism or limited government. Unfortunately those won’t be the outcome this year. If you want classical liberal options, pursue philosophical education of the public in time for the next election, or the one after. It’s very difficult to change the philosophical direction of a country and persuade millions of people, but that’s what must be done, step by step, however long it takes. Ideas rule the world, and nothing but spreading good ideas can actually solve our problems.


curi at 7:49 AM on October 21, 2016 | #6977 | reply | quote

Ayn Rand opposed third parties.

Ayn Rand said in the Objective Communication lecture 1 Q&A (2:29:50):

> [If you want to do range-of-the-moment political activity, you can volunteer at the political campaign of a good candidate.] But a Republican or a Democrat. Don’t go for third parties. They are all cranks, and they are all in it only for power.

If you’re considering a third party vote, you should first listen to Rand’s full answer as well as Peikoff’s comments a few minutes earlier.


curi at 6:05 PM on October 21, 2016 | #6995 | reply | quote

>> An adult living on welfare could be learning and accomplishing things, yet you wouldn't call their life moral and productive.

>>

>> Why is it different for children?

> You're trying to compare someone who has *done something wrong* to a child who has *done nothing wrong*.

What has the adult done wrong?


Anonymous at 9:03 AM on October 22, 2016 | #6997 | reply | quote

> What has the adult done wrong?

the moocher living on welfare – draining wealth from strangers who don't want to support him – who is grossly violating Objectivism? what has he done wrong? Read *Atlas Shrugged*!


Anonymous at 9:20 AM on October 22, 2016 | #6998 | reply | quote

a reply about Popper's influence and good ideas

Influence is hard to judge. Short misrepresentations of Popper are much more commonly taught at schools than anything good about him.

Secondary sources on Popper are extremely unreliable, and Popper's best points are easily misunderstood because they are unusual and counterintuitive. People who talk about Popper as a falsificationist, skeptic or positivist don't understand Popper well.

The Popperian Rafe Champion investigated Popper's influence. He wrote:

> in 1989 I surveyed the undergraduate courses and reading lists in Philosophy, Politics and Sociology in the (then) 21 Australian universities. The objective was to find what they were being told about Popper and Hayek who I regarded as the twin pillars of anti-scientism and classical liberalism. The short answer is that you had to be very lucky to get more than a passing reference to Popper and the situation with Hayek was worse.

In 1998 he took another look:

> I searched 200+ websites of philosophy schools, mostly in the US but also a few others like Cambridge. The story was the same, Popper rated a mention as a part of the convulsion in the field that involved Kuhn, Lakatos, Feyerabend and the sociology of science. No indication that his contribution was more robust or extended into other fields where he did first order work - evolutionary epistemology, logic and probabililty theory, metaphysics, philosophy of mind, history of ideas, politics and the social sciences. Cambridge at that time was arguable the worst course in the world in terms of keeping positivism alive!

Note that Kuhn, Lakatos, and Feyerabend are enemies.

Then in 2009:

> In recent years some writers have emerged with a strong line of philosophy books for layfolk - the names that come to mind include Grayling and de Botton, with others who have been less prolific ... [my] impression is that Popper might get a para and a footnote, or maybe even a couple of pages, half devoted to the failure of falsificationism. This is weird because I never thought that "falsificationist" was an appropriate label for Popperism, that is just the most obvious point of difference from the positivists.

And he tried looking at academic books again but found:

> All that I have examined so far present a more or less distorted account of Popper's ideas. Some of the errors are repeated, almost word for word, suggesting that there is some primary source that they are drawing on (not yet identified because they don't tend to quote with citations, they just have reading lists at the end of the book or the chapter).

Champion also provided some details and quotes. I've also looked at some books myself and found the same thing.

A fair number of people have heard of Popper. He's been much more influential than most thinkers. But hardly anyone understands much about Popper's original ideas.

Popper's best work is in epistemology and much of it is compatible with Objectivism. I will start a new discussion about that in the future after I'm more familiar with this community. I expect lots of disagreement.

Some less controversial positive points about Popper:

_The Open Society and Its Enemies_ offers criticism of Marx, Plato and Hegel. It's worth reading (unlike most political philosophy books). It's not as good as Rand's criticism, but it's different. It offers some different perspective on criticizing them and makes some different arguments. As an example, a good point Popper makes is his criticism of "Who should rule?" questions. Rather than focus on getting the right leaders or policies in charge of society (and entrenching them there), it's better to work on setting up a good system for changing mistaken leaders and policies without violence.

Popper's work on the pre-Socratics, like Thales, Xenophanes and Parmenides, is very good. He has a good methodology.

Popper's criterion of demarcation between science and non-science is influential and helped emphasize the connection between science and observation. Popper helped combat the claims to "scientific" status by Marx, Freud and Adler.

The main point of _The Poverty of Historicism_ is true and someone needed to say it.

The title essay in _The Myth of the Framework_ is good work. It argues that it's possible to communicate and make rational progress despite having different frameworks (different languages, different cultures, different worldviews, different premises.) In other words, it's a refutation of polylogism, which HB has been discussing in regards to Mises.

Popper refuted logical positivism by pointing out what it says about itself.

Popper argued against the linguistic analysis school of thought.


curi at 3:04 PM on October 23, 2016 | #7002 | reply | quote

Examples of misunderstood thinkers.

Right. I’ve seen many misunderstandings with thinkers I think are good and have studied.

Short summaries of Rand get passed around in the culture that are really atrocious. I’m sure you guys have seen plenty of awful misstatements of Objectivism, of Mises, and of capitalism.

People mistakenly believe that Thomas Szasz was an anti-psychiatrist like R. D. Laing, despite Szasz’s repeated clarifications about his actual positions and how he differs from Laing (they aren’t even similar).

People mistakenly believe that William Godwin was a socialist and a French Revolution supporter, despite e.g. the anti-revolutions chapter in An Enquiry Concerning Political Justice.

People mistakenly believe that Edmund Burke was an arch-conservative (he was a classical liberal). There’s also a widespread myth that Burke was after money and power, when in reality he passed up lots of money and power to follow his principles. Here’s a statement of the myth:

> Lewis Namier, a dominant modern historian of eighteenth-century Britain, regarded Burke as no more than an opportunist pamphleteer, a paid functionary of the Rockingham machine. Namier and his followers dismissed the idea that Burke had a mind and a philosophy or a set of influential arguments as a sentimental fantasy indulged by amateurs.

It’s not just individuals who are misunderstood and distorted. The other day I was reading Greek Ways: How the Greeks Created Western Civilization. It begins by discussing how lots of classicists have been attacking the achievements of Greece and its role in Western civilization. They focus on some flaws like how Greeks were slave owners and women had lower status than men. All ancient societies had flaws like those, but the Greeks had unique achievements! Then it has a chapter setting the record straight on Greek attitudes to sexuality. It turns out they weren’t so big on promiscuity, homosexuality and orgies as people have been told. Cupid (called Eros in Greek), I was surprised to read, was actually feared (bow and arrows were a fearsome weapon back then, not cute) and self-control was valued.


curi at 10:14 AM on October 24, 2016 | #7009 | reply | quote

Propagandists hijack cues.

> To give another example, public opinion has shifted drastically away from the position that cigarettes are safe thanks to the testimony of scientists, which provides a reliable low information cue to people who don’t have time to investigate the research.

And, seeing that that cue works well, propagandists have started making claims based on the testimony of scientists. This weakens the cue and also spreads misinformation. For example, we’re flooded with propaganda about how virtually every scientist agrees with the whole global warming story, and therefore we should too. This has fooled a lot of people, and has also contributed to increased skepticism of scientists by some. (“If scientists can be leftist activists about global warming, they could be leftist activists about evolution too.”)

When people judge ideas using shorthand proxies, then a lot of effort goes into manipulating and misrepresenting those proxies. And that’s bad for the side with true ideas. Being right matters less when debating proxies (e.g. how many scientists agree with global warming and how good are their credentials) instead of the actual ideas (e.g. how are the warming-predicting climate models designed and are they reliable?)

Skill at manipulating low-information decision makers is pretty independent of skill at truth-seeking. So the people with better ideas win much less reliably than in a contest directly about ideas (like a debate).

Many attention-getting techniques, like using funny “meme” pictures and emoji, are available to people with bad ideas about as much as to people with good ideas.

I’ve also criticized manipulating people with misleading charity donation-matching drives. Tricking people with donation-matching advertisements is approximately independent of being an efficiently-run charity with an important cause.

Part of **my thinking about changing public opinion is to focus on doing things where having good ideas makes a large difference, rather than a small difference**.

---

http://blog.givewell.org/2011/12/15/why-you-shouldnt-let-donation-matching-affect-your-giving/


curi at 11:04 AM on October 24, 2016 | #7010 | reply | quote

myth of mental illness

> Is it really your view that there is no such thing as mental illness, and that “All the observations made are compatible with other explanations”?

Yes. Start with Thomas Szasz's brief manifesto. Then see The Myth of Mental Illness: Foundations of a Theory of Personal Conduct and my essay on "mental illness".

"Mental Illness" is a mechanism of *social control* which uses the prestige of medical science to attack inconvenient, disapproved *ideas* and *(mis)behavior*.

> The way I look at it, physical malfunctions of the body can interfere with every other human capacity, such as breathing, motor functions, vision, etc. Therefore, it would be very strange if there were no mechanisms by which man’s cognitive functions could be disrupted by physical malfunctions of the brain.

Physical malfunctions are not "mental illnesses". They are given other names. They are brain tumors or Alzheimer's. Those are physical illnesses.

"Mental illness" is a *metaphor* applied to cases where there is no detectable physical damage and some ideas or behaviors by a person are *unwanted* (by someone who has a complaint that our society considers legitimate). That's why mental illness is "diagnosed" by behavior and ideas.

> perhaps your comments are rather directed towards the overwhelming majority of mental illnesses, which would include depression, and anxiety disorders, as well as OCD and ADD.

Not just those. See Szasz's Schizophrenia: The Sacred Symbol of Psychiatry. These comments are also directed towards autism. But not towards cancer.

> many of the people who suffer from these conditions clearly seem to need some kind of help. If you’re going to eliminate the concept “mental illness,” you need some way to address the problems it’s currently being used to solve. What are the “other explanations”? What are the alternative solutions?

Yes. People have personal *problems in their lives*, and have conflicts with others. Help often helps. The issue here is *problem solving*, which you already know about. Advice from friends can help. Buying a dishwasher can solve a conflict about washing dishes. Reading _Atlas Shrugged_ can help. Methods of problem solving are as varied as human activity.

Szasz talks about *the medicalization of everyday life*. Having problems, and trying to solve them, is part of everyday life. Many things can be done about it. But *it's not a medical issue*.

I don't object on principle to paying people for helping discussions. Nor do I object to those paid consultants having training. Some things I do object to about "therapy":

- Calling talking "therapy", which mixes it up with *medicine* and *healing*. It is (or at least should be) discussion and problem-solving.

- After confusing life problems with medical problems, then giving people drugs alleged to "treat" their problems.

- Involuntary "treatment".

- The de-facto *criminalization* of suicide.

- The actual training professional "therapists" get today has large flaws.

- When Joe's "therapy" is paid for by someone else who has different interests than Joe. (It's OK if the state, employer or parent gives Joe *cash*, with no strings attached, and then Joe himself chooses to buy advice. I mean it's OK on this topic, but that would violate the proper role of government.)


curi at 9:41 AM on October 25, 2016 | #7014 | reply | quote

defending szasz from HB

I agree with most of this and have observed some similar things myself (e.g. often people say they like an idea, but then don’t do anything with it in their life). BTW Popper says similar stuff to your comments on self-evidency.

My experience is that people also frequently think they understand an idea when they don’t know much about it. They mistake superficial summary knowledge of an idea for knowing it. They don’t chew and discuss the idea. Then they can’t implement the idea because they don’t know much about it. They don’t put a bunch of effort into learning it more, either, because they don’t value ideas much or know how substantive ideas can be (they don’t realize what they’re missing). Also they don’t know how to learn ideas better even if they were willing to spend time on it. Schools contribute to these problems by teaching people bad methods of learning and bad (very low) standards for what counts as understanding an idea. People routinely pass school tests while confused about the material and while writing vague crap on the test, and they learn that’s what’s expected.

Also people are generally very passive about their lives, even when they do recognize problems such as ignorance (and school plays a role there too by teaching people to sit still, wait for authority to tell them what to do, and then obey instructions … but don’t take initiative or make your own judgments about what to do).

However, Szasz is an exception who has a good perspective on the role of ideas. HB claims:

> Szasz wrongly claims that psychological problems that aren’t neural are “behavioral.”

But Szasz’s manifesto says:

> The term “mental illness” refers to the undesirable thoughts, feelings, and behaviors of persons. Classifying thoughts, feelings, and behaviors as diseases is a logical and semantic error, like classifying the whale as a fish. …

> If we recognize that “mental illness” is a metaphor for disapproved thoughts, feelings, and behaviors, we are compelled to recognize as well that the primary function of Psychiatry is to control thought, mood, and behavior. [bold added]

The word “behavioral” is not in the manifesto. So I checked The Myth of Mental Illness. It’s there three times, but it’s not used in the way HB’s quotation suggests. Instead Szasz writes (ch. 13):

> Hysteria is thus mainly a coercive game, with small elements of self-help and still smaller elements of cooperation blended in. This view implies that the hysteric is unclear about his values and their connection with his behavior.

>

> ...

>

> The women in Anna O.’s position [nursemaid for a sick father] were—as are their counterparts today, who feel similarly entrapped by their small children—insufficiently aware of what they valued in life and of how their own ideas of what they valued affected their conduct. For example, young middle-class women in Freud’s day considered it their duty to take care of their sick fathers. Hiring a professional servant or nurse for this job would have created a moral conflict for them, because it would have symbolized to them as well as to others that they did not love their fathers. [emphasis added]

Szasz is writing about how “hysterics” respond to disliked situations with “hysteria” instead of with clear thinking about the role of their own moral ideas (often picked up uncritically and somewhat unintentionally from their culture) in their behavior and situation.

It’s easy to misunderstand people who have different contexts than your own. I’ll say more about communication and miscommunication in my reply regarding Popper which I’m writing.


curi at 11:45 AM on October 26, 2016 | #7029 | reply | quote

The Myth of Mental Illness

I was going to post the following in the Myth of Mental Illness thread. But then I saw Alan's post in the same thread, and I thought it covered what I was going to say better. So I didn't post mine.

===

Re: Harry Binswanger's post 13710 of 10/25/16

One-line summary: being deemed "insane" is different from posing objective threat

Harry Binswanger writes:

> Szasz is against involuntary institutionalization of the insane. But institutionalization is clearly necessary when the person poses an objective threat.

How is the second sentence relevant to Szasz's position? Some people may be deemed "insane" and yet not pose an objective threat, while others may pose an objective threat and yet not be deemed "insane". I expect Szasz is against institutionalizing the former, and in favor of institutionalizing the latter.


Alisa at 12:13 PM on October 26, 2016 | #7030 | reply | quote

post it

this is typical. you should post your thoughts to expose them to criticism! and because you don't say the identical thing as other people!

> I expect Szasz is against the institutionalizing the former, and in favor of institutionalizing the latter.

no. he's in favor of *trials and criminal justice* for the latter (the people who pose an objective threat), which is different than institutionalizing them.

> One-line summary: being deemed "insane" is different from posing objective threat

this is good and Alan doesn't say the same thing.

it's important that ideas are promoted in multiple ways, from multiple perspectives. Alan's style won't work well for all readers. nor will mine. some people will prefer Alisa's style.

as one tangible example, Alan's post in the thread is long and Alisa's is short. some people won't read Alan's post but would read Alisa's. and some people will appreciate that Alisa selected a particular point of higher importance to focus on, rather than covering everything. there's value in covering everything, but there's also value in emphasizing key points.

help us teach the world to think. don't throw away good content you already wrote!


curi at 12:22 PM on October 26, 2016 | #7031 | reply | quote

> you should post your thoughts to expose them to criticism!

I agree I should have posted it SOMEWHERE. I posted it here, and got crit. I'm not interested in what the HBL letter ppl would say.

> he's in favor of *trials and criminal justice* for the latter (the people who pose an objective threat), which is different than institutionalizing them.

That's a more precise way to put it.


Alisa at 12:27 PM on October 26, 2016 | #7032 | reply | quote

> That's a more precise way to put it.

it's not more precise. it's correct instead of incorrect.

> I'm not interested in what the HBL letter ppl would say.

why? do you not want to understand the world? do you not think Objectivism matters? and do you not care about spreading FI to the world?


Anonymous at 12:31 PM on October 26, 2016 | #7033 | reply | quote

>> That's a more precise way to put it.

>

> it's not more precise. it's correct instead of incorrect.

I guess you could say that. I was speaking more broadly, more vaguely.

> > I'm not interested in what the HBL letter ppl would say.

> why? do you not want to understand the world? do you not think Objectivism matters?

No. Regarding those two questions, the reason is that the crit I see on FI is way higher quality than what I see on HBL. It's not like I'm caught up on FI and looking for more. I haven't come close to exhausting my highest-quality resource.

> and do you not care about spreading FI to the world?

No. I do care about that. I just have other ways of doing it, as you know.


Alisa at 12:38 PM on October 26, 2016 | #7034 | reply | quote

> No. Regarding those two questions, the reason is that the crit I see on FI is way higher quality than what I see on HBL. It's not like I'm caught up on FI and looking for more. I haven't come close to exhausting my highest-quality resource.

it's important to be familiar with other perspectives besides FI.

there is FI participation at HBL so you can get things like FI ppl referencing what you say in their contributions (if you post there).

you already wrote this and it'd be basically free to post it to HBL.

you're basically saying you'll do it later, after a bunch of other stuff you aren't doing much of. "i should do X, which i'm not doing, so i won't do Y either b/c X is more important than Y" is a dumb way to get stuck and not do anything.

preventing yourself from participating on HBL will not help you participate more on FI. actually the opposite. getting interested in a philosophy topic via HBL (which you apparently were, enough to write this) would lead to more time spent on FI stuff. by blocking stuff like this you block many of the ways you could get involved with FI.

>> and do you not care about spreading FI to the world?

> No. I do care about that. I just have other ways of doing it, as you know.

why not do this way too?


Anonymous at 12:45 PM on October 26, 2016 | #7035 | reply | quote

The Myth of Mental Illness

Harry Binswanger wrote:

> But institutionalization is clearly necessary when the person poses an objective threat.

Elliot Temple wrote:

> [Szasz is] in favor of *trials and criminal justice* for the latter (the people who pose an objective threat), which is different than institutionalizing them.

Just to be clear, being institutionalized includes being in prison. I didn't want to argue with how precise HB was being. I could have argued with HB and said: No, institutionalization isn't necessary for people who pose an objective threat -- what's necessary are criminal justice and trials.

But that wasn't my main point. I don't think there's a big conflict between HB and Szasz about what to do with people who pose an objective threat. I wanted to focus on the mental health issue.


Alisa at 12:48 PM on October 26, 2016 | #7036 | reply | quote

> it's important to be familiar with other perspectives besides FI.

Why?

> you're basically saying you'll do it [post on HBL] later,

No. I expect I may never catch up at FI.

> why not do this way too?

Irrationality, laziness, lack of initiative, etc. My usual excuses.


Anonymous at 12:50 PM on October 26, 2016 | #7037 | reply | quote

> I don't think there's a big conflict between HB and Szasz about what to do with people who pose an objective threat.

there is a big conflict b/c HB wants many of them put in "mental hospitals" and e.g. forcibly drugged, and szasz doesn't.


Anonymous at 1:04 PM on October 26, 2016 | #7038 | reply | quote

> > it's important to be familiar with other perspectives besides FI.

> Why?

- if you don't know other perspectives, you have no idea if FI is better or not. this may be one of the reasons for your supposed "laziness" re learning FI – your ignorance about its superiority and how much and what difference(s) it actually makes.

- b/c the world matters and you live in it and deal with people

- b/c you have all sorts of non-FI ideas and it helps you understand yourself better


Anonymous at 1:07 PM on October 26, 2016 | #7039 | reply | quote

>> I don't think there's a big conflict between HB and Szasz about what to do with people who pose an objective threat.

> there is a big conflict b/c HB wants many of them put in "mental hospitals" and e.g. forcibly drugged, and szasz doesn't.

Both Szasz and HB want people who pose an objective threat to be institutionalized. But the word "institutionalized" can also be used to mean some things that Szasz would not endorse.

I agree this conflict is important. Maybe it shouldn't be swept under the rug just because the same word can be used in both cases.


Anonymous at 1:54 PM on October 26, 2016 | #7040 | reply | quote

> Why?

> - if you don't know other perspectives, you have no idea if FI is better or not.

I agree that finding things better than FI would be a good reason to be familiar with non-FI stuff. However, I don't think I'm going to find something better than FI on HBL. If you disagree, or if you know of a place where I could plausibly find something better than FI, I would like to hear about it.

> - b/c the world matters and you live in it and deal with people

So?

> - b/c you have all sorts of non-FI ideas and it helps you understand yourself better

I'm already surrounded with non-FI ideas, just through living in the world. I don't have to do anything special to be exposed to those ideas. What might help is to understand those ideas better. I don't know of a better place than FI to obtain that understanding.


Alisa at 1:57 PM on October 26, 2016 | #7041 | reply | quote

in this context, "institutionalize" means put someone in a "mental hospital" or other psychiatric institution and does not include imprisoning them in a regular prison at all.


Anonymous at 1:58 PM on October 26, 2016 | #7042 | reply | quote

>> - b/c the world matters and you live in it and deal with people

> So?

Just to expand on this - my question here is really, why wouldn't I just start on the FI list if I wanted to learn more about this?

You keep recommending non-FI stuff, but I don't think non-FI stuff is my bottleneck ATM.


Alisa at 2:00 PM on October 26, 2016 | #7043 | reply | quote

> I agree that finding things better than FI would be a good reason to be familiar with non-FI stuff.

that's not what that sentence says. read it again.

> So?

so information about the world has value and relevance.

> I'm already surrounded with non-FI ideas, just through living in the world.

different ones in different ways. do you really think this would be wasted *duplicate effort*? your reply seems dishonest.


Anonymous at 2:00 PM on October 26, 2016 | #7044 | reply | quote

> in this context, "institutionalize" means put someone in a "mental hospital" or other psychiatric institution and does not include imprisoning them in a regular prison at all.

Ah. If I thought that's what HB meant, then I would have argued with him.


Alisa at 2:01 PM on October 26, 2016 | #7045 | reply | quote

> You keep recommending non-FI stuff, but I don't think non-FI stuff is my bottleneck ATM.

i recommended *here* that you don't **shut down interests** you actually were interested in enough to write about!!

you are destroying your own philosophy-related interests, on purpose, with bad arguments/excuses for why it's wise to do so ... and then wondering why you aren't pursuing philosophy.

if you have a criticism of some *other* recommendation you should both state it and criticize it.


Anonymous at 2:03 PM on October 26, 2016 | #7046 | reply | quote

>>> - if you don't know other perspectives, you have no idea if FI is better or not.

>> I agree that finding things better than FI would be a good reason to be familiar with non-FI stuff.

> that's not what that sentence says. read it again.

I just did. My reply still looks good to me. I stand by it.

> so information about the world has value and relevance.

Some information that solves my problems has relevance. In general there's a lot of information about the world that has no value and no relevance to me.

>> I'm already surrounded with non-FI ideas, just through living in the world.

> different ones in different ways. do you really think this would be wasted *duplicate effort*? your reply seems dishonest.

Do I really think *what* would be wasted duplicate effort? Posting on HBL? I don't see posting on HBL as a good way to learn about the world compared to posting on FI, if that's what you mean.


Anonymous at 2:04 PM on October 26, 2016 | #7047 | reply | quote

>> You keep recommending non-FI stuff, but I don't think non-FI stuff is my bottleneck ATM.

> i recommended *here* that you don't **shut down interests** you actually were interested in enough to write about!!

I posted here. And I agree I should have done it on my own initiative.

> you are destroying your own philosophy-related interests, on purpose, with bad arguments/excuses for why it's wise to do so ... and then wondering why you aren't pursuing philosophy.

I mean, I do lots of bad things, so in general I probably am destroying my philosophy-related interests. I don't see how posting here instead of HBL is an example of that. If you mean my failure of initiative to do it on my own, then OK, guilty as charged.

> if you have a criticism of some *other* recommendation you should both state it and criticize it.

What other recommendation?


Alisa at 2:05 PM on October 26, 2016 | #7048 | reply | quote

> I just did. My reply still looks good to me. I stand by it.

your reply is a non sequitur.

the sentence says (conditionally) that you don't know how good FI is.

you then start talking about the value of finding better things. you ignored the problem stated (not knowing how good FI is) and talked about something else (that you could maybe search out some new great stuff).

> I don't see posting on HBL as a good way to learn about the world compared to posting on FI, if that's what you mean.

why do you think you wrote an HBL reply in the first place?


Anonymous at 2:07 PM on October 26, 2016 | #7049 | reply | quote

> What other recommendation?

when you wrote:

> You keep recommending non-FI stuff, but

you were talking about multiple recommendations (due to the meaning of "keep").


Anonymous at 2:11 PM on October 26, 2016 | #7050 | reply | quote

>>>> - if you don't know other perspectives, you have no idea if FI is better or not.

>>> I agree that finding things better than FI would be a good reason to be familiar with non-FI stuff.

>> that's not what that sentence says. read it again.

> your reply is a non sequitur.

Ok, I see what you mean.

> the sentence says (conditionally) that you don't know how good FI is.

Fair, I probably don't know how good FI is.

> you then start talking about the value finding better things. you ignored the problem stated (not knowing how good FI is) and talked about something else (that you could maybe search out some new great stuff).

How would learning about other stuff help me learn how good FI is unless the other stuff is plausibly better than FI? I don't see HBL as a candidate for that.


Alisa at 2:13 PM on October 26, 2016 | #7051 | reply | quote

>> You keep recommending non-FI stuff, but

> you were talking about multiple recommendations (due to the meaning of "keep").

I meant "keep recommending" as in "have been repeatedly recommending". I didn't see you making multiple different recommendations, just trying multiple times to get the same idea across to me.


Anonymous at 2:15 PM on October 26, 2016 | #7052 | reply | quote

> How would learning about other stuff help me learn how good FI is unless the other stuff is plausibly better than FI? I don't see HBL as a candidate for that.

if you know about some things that are worse than FI it gives you a point of comparison.

btw i bet lots of ppl at HBL could have told you that.


Anonymous at 2:24 PM on October 26, 2016 | #7053 | reply | quote

> I meant "keep recommending" as in "have been repeatedly recommending".

oh. i didn't read it that way b/c i don't recall doing that. i thought i made one recommendation then discussed some sub-points.


Anonymous at 3:06 PM on October 26, 2016 | #7054 | reply | quote

> > How would learning about other stuff help me learn how good FI is unless the other stuff is plausibly better than FI? I don't see HBL as a candidate for that.

> if you know about some things that are worse than FI it gives you a point of comparison.

I agree with that sentence in isolation, but the context is that I already know a ton of things that are worse than FI.

> btw i bet lots of ppl at HBL could have told you that.

Probably. But I already knew it.


Alisa at 3:19 PM on October 26, 2016 | #7055 | reply | quote

> > I meant "keep recommending" as in "have been repeatedly recommending".

> oh. i didn't read it that way b/c i don't recall doing that. i thought i made one recommendation then discussed some sub-points.

Ah, I see what you mean. So I was wrong: you didn't repeatedly recommend; you basically recommended once and then made sub-points related to that.


Anonymous at 3:20 PM on October 26, 2016 | #7056 | reply | quote

i don't think you have good, relevant points of comparison. you seem to be ignoring relevance when you just speak of "a ton of things" and give no examples.


Anonymous at 3:21 PM on October 26, 2016 | #7057 | reply | quote

> i don't think you have good, relevant points of comparison. you seem to be ignoring relevance when you just speak of "a ton of things" and give no examples.

Examples of things that are worse than FI:

- being vague

- wilful blindness, purposefully evading things, like an ostrich sticking its head in the sand

- being passive

- taking things out of context

I'm surrounded with this stuff, plus I do it all the time.

What kind of thing did you have in mind for comparison? And do you really think time on HBL is better spent than time catching up on FI?


Alisa at 3:34 PM on October 26, 2016 | #7059 | reply | quote

none of those are systems of ideas.

none of those are concrete.


Anonymous at 3:34 PM on October 26, 2016 | #7060 | reply | quote

> And do you really think time on HBL is better spent than time catching up on FI?

this is a straw man. i was clear about what i thought last time you brought this up: you shouldn't self-sabotage by making excuses not to post your own content after you write it.

> > I don't see posting on HBL as a good way to learn about the world compared to posting on FI, if that's what you mean.

> why do you think you wrote an HBL reply in the first place?

why didn't you answer this?


Anonymous at 3:36 PM on October 26, 2016 | #7061 | reply | quote

> you shouldn't self-sabotage by making excuses not to post your own content after you write it.

I agree about this. Although I can sort of make some bs excuses about why I didn't post it on my own initiative, I'm not making excuses for that going forward. I'm persuaded that posting it on curi.us would have been good this time and would be good in the future.

>> why do you think you wrote an HBL reply in the first place?

> why didn't you answer this?

I missed it. I think I wrote the reply in the first place because I thought I had something important and good to say that hadn't already been said better by someone else.


Alisa at 3:54 PM on October 26, 2016 | #7062 | reply | quote

> none of those are systems of ideas.

Ok.

> none of those are concrete.

Ok.

I still don't get what you think I would benefit from spending my time on more than FI.


Alisa at 3:56 PM on October 26, 2016 | #7063 | reply | quote

> I still don't get what you think I would benefit from spending my time on more than FI.

you aren't spending much time on FI and wrote this thing b/c you were *interested* and pursuing philosophical interests is important.


Anonymous at 4:00 PM on October 26, 2016 | #7064 | reply | quote

> you aren't spending much time on FI and wrote this thing b/c you were *interested* and pursuing philosophical interests is important.

I wasn't particularly interested in learning when I started writing the HBL post. I was just interested in correcting HB's mistake. I think my posts on playing video games and other threads I started are more relevant to my actual interests.


Alisa at 4:28 PM on October 26, 2016 | #7065 | reply | quote

Clarifying Popper (not a skeptic!)

I'd like to resolve this disagreement between Critical Rationalism and Objectivism. I think it's an unnecessary result of confusion and misunderstanding. The dispute prevents Objectivists from learning Karl Popper's valuable epistemological breakthrough. And if I have misunderstood something, I'd like to find out.

My view: Popper offers a system of thought which breaks with over 2000 years of epistemological tradition and solves the problem of induction. Popper did not work out every detail correctly, and adjusted his arguments over his lifetime. He originated the major breakthrough. David Deutsch and I have refined the breakthrough.

Popper's epistemological ideas are counterintuitive and complicated. (I can tell you that knowledge is created by evolution, but you won't understand what I mean by that yet.) Popper is a worse communicator than Rand. _The Logic of Scientific Discovery_ is an earlier book which is harder to understand. Popper was aware of communication difficulties[1].

Writers always have choices about style, what to emphasize, what to focus attention on, and which possible objections to address. Popper makes different choices than Ayn Rand. This perspective difference, along with Popper's bad politics, has led to mutual misunderstanding.

Popper emphasizes anti-infallibilism and anti-authoritarianism. Objectivism emphasizes anti-skepticism and anti-relativism. Here's one of the results: Popper talks about "conjectural knowledge". Objectivism talks about "certain knowledge". *These are the same thing* with a different emphasis.

Popper uses the qualifier "conjectural" to deny infallibility. Objectivism uses the qualifier "certain" to deny skepticism (and to differentiate certain/probable/possible knowledge, which I have a criticism of).

Popper's choices about how to present his philosophy are most informed by his goal of presenting his solution to the problem of induction. He optimizes for that. Emphasizing fallibility helps there.

What if someone hears "certain" and thinks Objectivism is infallibilist? It happens. But Objectivism chooses to prioritize communicating its rejection of skepticism. And if someone reads more, they can find out that Objectivism also accepts fallibility.

What if someone thinks Popper is a skeptic? It happens. But Popper chooses to prioritize communicating his rejection of infallible knowledge. And if someone reads more, they can find out that Popper also rejects skepticism. This is the first paragraph of the preface of his _Conjectures and Refutations_:

> The essays and lectures of which this book is composed are variations upon one very simple theme—the thesis that *we can learn from our mistakes*. They develop **a theory of knowledge** and of its growth. It is a theory of **reason** that assigns to rational arguments the modest and yet important role of criticizing our often mistaken attempts to solve our problems. And it is a theory of **experience** that assigns to our observations the equally modest and almost equally important role of tests which may help us in the discovery of our mistakes. Though it **stresses our fallibility** it **does not resign itself to scepticism**, for it also stresses the fact that **knowledge can grow**, and that **science can progress**—just because **we can learn** from our mistakes. [bold added]

And in chapter 1:

> I think that those who put the problem of induction in terms of the *reasonableness* of our beliefs are perfectly right if they are dissatisfied with a Humean, or post-Humean, sceptical despair of reason.

I'll footnote more information about skepticism[2].

The standard definition of knowledge is "justified, true belief." Popper says this is infallibilist because philosophers only count omniscient, final truth as "true" here. Actually fallible ideas can constitute knowledge (I agree). He talks about "conjectural knowledge" to differentiate it from the infallibilist "justified, true belief" conception. Objectivism makes a similar point when it says omniscience is not the standard of knowledge (which means that non-omniscient, fallible ideas can constitute knowledge.)

Misunderstandings go both ways. Not many people have the patience and interest to carefully study a large body of work which initially seems wrong to them. It's pretty easy to find something that sounds wrong to you and reject it because contexts vary (and often you'll be right – the large majority of philosophers really are bad). I've studied both perspectives and can translate between them.

So I'll explain what Popper meant by some of the _The Logic of Scientific Discovery_ quotes Harry Binswanger (HB) objects to, as well as my take on how HB reads them. (I can do them all if we decide that's the most productive way to proceed. Please be patient. There's a lot to say, like Popper's position on induction, and it can't all come first.) I'll try to use this as an opportunity to talk about ideas; it's better to focus on the ideas more than the people.

> ... the various difficulties of inductive logic here sketched are insurmountable. [p. 29]

HB's reading: Popper says induction is impossible. That implies knowledge is impossible, so Popper's a skeptic.

Popper's point: the problem of induction, as traditionally conceived, is insoluble by a direct approach. It can only be solved by questioning its premises. (Popper came up with a solution to the underlying problem, "How do we get knowledge?" He rejected the false dichotomy that it's either induction or nothing. Popper's solution majorly differs from induction and should be called non-inductive.)

> ... there is no such thing as a logical method of having new ideas, or a logical reconstruction of this process. My view may be expressed by saying that every discovery contains 'an irrational element', or 'a creative intuition' ... [p. 32]

HB's reading: Popper is an open irrationalist who denies that some methods of thinking are better than others.

The wording using "irrational" is atrocious writing. But Popper's point is:

Knowledge is created by an evolutionary method using guesses and criticism. There is no formal or logical process for brainstorming guesses. There's no recipe to follow, step by step, to get to "Eureka!" Creative intuition is involved. Rather than try to avoid bad ideas using logical rules governing brainstorming, bad ideas are addressed by criticism.

> Thus anyone who envisages a system of absolutely certain, irrevocably true statements as the end and purpose of science will certainly reject the proposals I shall make here. [p. 37]

HB's reading: Popper rejects the capability of science to acquire knowledge.

Popper's point: fallibilism.

Every time Popper talks about certainty, he's thinking of infallibility. When Popper rejects "absolutely certain" ideas, he's saying humans are fallible and science does not offer omniscience.

When Popper rejects "irrevocably true statements," he's saying that in a future context we may get new information and change our minds. Popper thinks of this in terms of fallibility. Whatever we do, we may have made a mistake. Popper also thinks of revising ideas when we get new information in terms of correcting mistakes. Popper is unaware of the Objectivist perspective that older ideas remain contextually true. Popper's point is our ideas are never final; further thinking, progress and improvement are always possible.

---

**Footnotes:**

[1] Regarding communication difficulties:

_Realism and the Aim of Science_, ch. 1, sec. 1:

> ... if people think on inductive lines—and who does not?—then a remark like ‘I do not believe in induction’ can hardly be interpreted in any other sense than ‘I do not believe in science’. Nor do I think that I should have conveyed my meaning better had I begun, say, with the words, ‘I believe in the greatness of science, but I do not believe that the methods or procedures of science are inductive in any sense’.

Then in ch. 1, sec. 2, Popper talks more about why and how he's been misunderstood as a skeptic, irrationalist and relativist. He had some ideas about how to explain himself better, and I think _Realism and the Aim of Science_ does explain better than _The Logic of Scientific Discovery_.

[2] Regarding skepticism:

_Realism and the Aim of Science_, ch. 1, sec. 2:

> ... even though I offer a negative solution to the classical problem of justification, resembling in this respect the sceptics and irrationalists, at the same time I dethrone the classical problem and replace it by a new central problem which allows of a solution that is neither sceptical nor irrationalist.

and later:

> ... *all philosophies so far have been justificationist philosophies*, in the sense that all assumed that it was the *prima facie* task of the theory of knowledge to show that, and how, we can *justify* our theories or beliefs. Not only the rationalists and the empiricists and the Kantians shared this assumption but also the sceptics and the irrationalists. The sceptics, compelled to admit that we cannot justify our theories or beliefs, declare the bankruptcy of the search for knowledge; while the irrationalists (for example the fideists), owing to the same fundamental admission, declare the bankruptcy of the search for reasons—that is, for rationally valid arguments—and try to justify our knowledge, or rather, our beliefs, by appealing to authority, such as the authority of irrational sources. Both assume that the question of justification, or of the existence of positive reasons, is fundamental: both are classical justificationists.

Popper rejects justificationism and has a different approach to the pursuit of knowledge which focuses on criticism. If you take a bunch of ideas and correct some errors using criticism, now you have better ideas. This is the evolutionary growth of knowledge, but it does not justify ideas as either *infallibly true* or *probably infallibly true*.

_Conjectures and Refutations_, ch. 10, sec. IX:

> Verificationists, I admit, are eager to uphold the most important tradition of rationalism—the fight of reason against superstition and arbitrary authority. For they demand that we should accept a belief *only if it can be justified by positive evidence*; that is to say, *shown* to be true, or, at least, to be highly probable. In other words, they demand that we should accept a belief only if it can be *verified*, or probabilistically *confirmed*.

>

> Falsificationists (the group of fallibilists to which I belong) believe—as most irrationalists also believe—that they have discovered logical arguments which show that the programme of the first group cannot be carried out: that we can never give positive reasons which justify the belief that a theory is true. But, unlike irrationalists, we falsificationists believe that we have also discovered a way to realize the old ideal of distinguishing rational science from various forms of superstition, in spite of the breakdown of the original inductivist or justificationist programme.

(When Popper says one can't "justify the belief that a theory is true", he's talking about the infallibilist sense of "true" from "knowledge is justified, true belief" which requires an idea to be true in *all contexts* to qualify as knowledge.)

Popper goes on to talk about his critical approach.


curi at 7:03 PM on October 26, 2016 | #7085 | reply | quote

Szasz is opposed to obscuring moral and legal issues with pseudo-medical babble.

> Mental illness, including psychosis, is real. Either Szasz is denying that insanity is fundamentally different from sanity, or he’s playing word-games (but note that the title of his book is The Myth of Mental Illness).

Szasz is not playing word games. He is pointing out that what is commonly called mental illness is behaviour, not an illness. An illness is a structural or chemical abnormality in the body; behaviour is not such an abnormality. If you look at diagnostic criteria for mental illness, they typically exclude drugs and medical conditions as a cause and only discuss behaviour, e.g.

http://www.mental-health-today…..ep/dsm.htm

Pretending that behaviour is illness obscures moral and legal issues. This is very dangerous.

HB approvingly quotes Branden and claims he is clear:

> The standard of mental health—of biologically appropriate mental functioning—is the same as that of physical health: man’s survival and well-being. A mind is healthy to the extent that its method of functioning is such as to provide man with the control over reality that the support and furtherance of his life require. . . .

> The proper function of consciousness is: perception, cognition, and the control of action.

> An unobstructed consciousness, an integrated consciousness, a thinking consciousness, is a healthy consciousness. A blocked consciousness, an evading consciousness, a consciousness torn by conflict and divided against itself, a consciousness disintegrated by fear or immobilized by depression, a consciousness dissociated from reality, is an unhealthy consciousness.

Whether somebody exercises self-control, integrates his ideas and so on is a moral issue, not a medical issue. Branden obscures that truth in this passage with lots of pseudo-medical jargon. This passage illustrates that Szasz is right about the dangers of mental illness talk.

HB continues:

> Also, Szasz is against involuntary institutionalization of the insane. But institutionalization is clearly necessary when the person poses an objective threat.

How would you know a person is an objective threat unless he makes credible threats to commit a criminal act? And if he does that, we already have institutions for dealing with such people: police, courts and prisons.

If you can’t get a criminal conviction against a person, you ought not to be able to punish him. By any objective standard, imprisonment and forced drugging is punishment. Inflicting punishment without a trial is a threat to the rule of law.

Also, a lot of psychiatric treatment is enforced against people who have committed no crime, e.g., people who threaten or attempt suicide. Also, people who are inconvenient for their relatives often end up in mental hospitals. See Szasz’s description of the case of Rosemary Kennedy in “Coercion as Cure” by Szasz. Rosemary Kennedy was lobotomised because Joseph Kennedy feared her gregarious behaviour might embarrass him.

HB continues:

> And if a person is insane, he is not competent to exercise his own rights, so he may be institutionalized for his own protection and that of the sane.

If a person is capable of expressing a preference, then he is capable of exercising his rights. You may disagree with him and criticize the way he uses his rights. But if you go beyond that into punishing him without a trial, you have broken the rule of law.

HB writes:

> (Yes, you can raise questions about the legal criteria for declaring a person insane, and the criteria may indeed be too lax, or too strict, in a given state–that’s a red herring.)

There are no objective standards at all. It’s not a matter of laxness. Psychiatry is the rule of those who have pull. If you can lock somebody up or drug him against his will without convicting him of a criminal act, then what is the objective standard by which you imprison and torture him?

HB writes:

> I blame Szasz for the surge of psychotics on the streets of New York City and, I assume, elsewhere–his work spread the idea that these unfortunate people are better off “in the community”–i.e., sleeping on the sidewalks, accosting passersby, creating health risks and setting fires.

Then you’re blaming the wrong person. Szasz did not recommend forcing patients to leave mental hospitals. See Chapter 19 of his book Law, Liberty and Psychiatry, in which he recommended that people should be offered advice and help if they want it, and left to go free if they don’t. See in particular section 2 of the “Long-Range Goals” section of the chapter. He explicitly condemned turning people out of mental hospitals; see “Cruel Compassion”, Chapters 9-11.


alan should put his posts here at 11:24 PM on October 26, 2016 | #7092 | reply | quote

Szasz answered these arguments already.

HB wrote:

> I agree with David Wilens, based on the report of an (Objectivist) father whose son suffered from it in grade school. He told me that the boy experienced dramatic relief from the problem twenty minutes after trying a dose of the drug. It was, I think, Ritalin. The boy himself was very pleased with the effect, and he continued to take it for several years, with dramatic improvement in his ability to concentrate on his school work.

There could be lots of reasons for Ritalin having that effect that have nothing to do with illness. The person taking the Ritalin could expect to concentrate better, and do so as a result. The person taking the Ritalin might feel sensations that he interprets as helping him concentrate more, e.g. he might feel less tired. The person might dislike Ritalin, and think that if he doesn’t shape up he might be forced to take something even worse. In the absence of a specification of the cause of ADHD and the effect of Ritalin there is no way to tell whether it is treating an illness.

Byron Price wrote:

> I think mental illness does exist. The cause may not be known or may be very subtle. A couple of neurons that don’t fire properly could cause problems in the right location. There would be little chance of finding those defective neurons; all you would see is the symptom. Severe stress will cause psychic trauma. If it continues long enough it probably causes structural changes in the brain.

This is super vague. Not sitting still at school can’t be a result of a couple of neurons misfiring. Whether you do X or Y is a result of your judgement about the best thing to do. That judgement can’t be a result of just two neurons firing since it takes into account stuff like what role other people expect you to play, your perception of the value of sitting still or moving etc. Also, mental illness is specified in terms of behaviour not brain stuff, as I explained previously.

John Bales wrote:

> One may suffer from anxiety and depression yet, from force of will, continue to behave rationally while silently suffering. Depression does not have to result from a physical abnormality or “chemical imbalance” but can be the result of repression. Because of the nature of repression, one may need assistance in discovering the nature of the repression and assistance in dealing rationally with whatever is being repressed. This is what a therapist does.

> I suspect that Szasz has never himself personally experienced clinical depression or anxiety. Had he done so, he would have known that he was not well.

From the description of Szasz’s book “The Ethics of Psychoanalysis” on Amazon:

> In this book, I propose to describe psychotherapy as a social action, not as healing. So conceived, psychoanalytic treatment is characterized by its aim–to increase the patient’s knowledge of himself and others and hence his freedom of choice in the conduct of his life; by its method–the analysis of communications, rules, and games; and lastly, by its social context–a contractual, rather than a ‘therapeutic, ‘ relationship between analyst and analysand.

Also note the use of mental illness as a smear tactic so John doesn’t have to deal with Szasz’s arguments.

HB wrote:

> Nor should we be worried about “drugging our kids” with these medications. It’s my understanding (and please correct me if I’m wrong) that for children with no actual neural problem, Ritalin and the like have no effect. It is not a sedative.

Would you object to being forced under threat of physical violence to eat a Tic Tac? It’s just one little sweet, and it won’t do you any harm, so why would you object?


alan should put his posts here at 11:25 PM on October 26, 2016 | #7093 | reply | quote

replying to a confused guy

One-line summary: Alan didn’t say you smeared Szasz.

Alan said you used mental illness as a smear tactic.

You did not call Szasz mentally ill. So Alan was not saying you smeared Szasz.

What you did do is assume that people *labelled* “mentally ill” are wrong/bad/irrational/repressed/etc. Sometimes they are actually just fine, happy, rational, etc. Sometimes they just want to be left alone to live their lives.

(You also personalized the discussion and tried to attack Szasz’s credentials rather than focus on arguing about ideas. Having a particular experience is a type of credential. And btw the belief that people who don’t have certain credentials cannot understand an issue is an attack on the power of reason. It’s also kinda bizarre because large numbers of people have never been “depressed” but nevertheless agree with you.)

When someone is labelled mentally ill, all you really know is the labeler (or the person paying the labeler) disapproves of him.

You should be careful taking sides in conflicts between people without additional information. Knowing one side made an accusation against the other side (“he is mentally ill”) is not adequate to pass judgement.

Sometimes people labelled “mentally ill” agree they have a problem and want help. Sometimes not. One thing we ought to be able to agree on, to get started, is the importance of voluntary interactions between people. Will you clarify and announce that you wish nothing to be done by force to “mentally ill” people who don’t want it done to them? (Outside of the regular criminal justice system. Force may be used against criminals even if they are “mentally ill” and even if they don’t want it.)


curi at 11:46 PM on October 26, 2016 | #7094 | reply | quote

:(

One-line summary: I don’t understand the problem with dollar signs.

The “Mental Illness” essay linked is a PDF which works on all platforms like PC and Mac. I didn’t realize the text mentioning iOS would be confusing, thanks for the feedback.

If paying $1 is a problem for you, you can email me and ask for a free copy.

The essay link comes immediately after an also-unlabelled-as-costing-money link that costs $10, which you are not complaining about. I don’t see the problem and was not expecting people here to have a problem with a dollar sign and demand warning labels for them.

---

links are:

my essay: https://gumroad.com/l/ezayH

$10: https://www.amazon.com/Myth-Mental-Illness-Foundations-Personal-ebook/dp/B004V54ENO/?tag=curi04-20


curi at 8:14 AM on October 27, 2016 | #7099 | reply | quote

People on HBL complained over having to pay for your essay?


Anonymous at 2:48 AM on October 28, 2016 | #7104 | reply | quote

> They are brain tumors or Alzheimer’s. Those are physical illnesses.

If they affect brain functioning, won't they reveal themselves as "mental illness"? Meaning, won't people's behaviour be less than fully rational, since they aren't able to process information properly?

Why can't software damage exist?

It seems you focus on the presence of physical damage, but until recently it was not possible to spot any in people with Alzheimer's while they were still living. So how could you tell they were really ill and not just acting badly by their own choice?

HB is very pro-mental-illness and hasn't even made an argument. He calls Szasz evil and yet uses an article from Branden of all people against him.

HBL is making the case for Ritalin as if diagnosing your kids with a mental illness were completely harmless: they just get the right medication to treat them and then carry on, free and happy, concentrating at school (and why aren't you guys arguing against the idea that concentrating at school is desirable?)

Maybe you can link to cases of children and young adults in the UK diagnosed with autism who were taken from their families against their will and put in institutions far away from home. In one of those institutions it was found that the staff physically abused the patients.


Anonymous at 3:21 AM on October 28, 2016 | #7105 | reply | quote

more Popper misunderstandings

quoting HB:

> To say “we may have made a mistake” is to make an assertion that requires justification. There is none. The assertion is arbitrary. Once the proper logical procedures have been carefully applied, and checked, *there are no grounds for doubt*.

You're misunderstanding the context and purpose of the statement.

"We may have made a mistake" is true, *as far as it goes*. We may have. It happens. There are no procedures for thinking which offer a 100% guarantee against mistakes. Don't reject a true statement just because some people misuse or misunderstand it.

"We may have made a mistake" doesn't go very far. It **is not grounds for doubt** and does not include any statements about doubting anything. It's good for denying infallibilism, and that's about it. It's basic and the thing to do is accept it and move on.

Don't reject "we may have made a mistake". Reject the *false implication*: "we may have made a mistake, *therefore* this claim is in doubt". That's wrong.

If someone says "we may have made a mistake" for the *purpose* of casting doubt, *then* he's mistaken.

Ruling out any possibility of having made a mistake is **not the standard of knowledge**. Popper knows this. He doesn't claim that an idea's fallibility renders it non-knowledge. He doesn't use *maybes* to cast doubt.

> (The fact that we have erred in other cases in the past does not per se provide evidence of error in a new and different case.) Such doubts are arbitrary and, as such, are to be dismissed, not entertained as a possibility.

Popper doesn't advocate this sort of doubting.

It's important to differentiate between 1) evidence of error (reasons to doubt this particular claim) and 2) evidence that error is possible in general, including in this case (reasons to be a fallibilist instead of an infallibilist).

The technical possibility of an error (an error wouldn't violate the laws of physics and there are no guarantees against it) is not a *criticism*.

You already know this. I already know this. Popper already knows this. It's pretty basic. Can we move on?

What would it take for you to become interested in Popper's positive achievement? I'm fine with discussing these misreadings if you want to keep bringing them up, but you won't learn about the positive value of Popper this way.

**Further Replies of Lesser Importance**

> (This is not to say that it is impossible that one has erred: “not possible” does not mean “impossible.”)

Right, in Objectivist terminology. I understand what you mean.

Popper's and my terminology, which I think is standard, is different. Basically anything that doesn't violate the laws of physics, and can't be ruled out, is "possible". But then saying something is "possible" doesn't say much and doesn't mean it's worth our attention. What's possible only comes up when discussing certain issues like the laws of physics. Or if someone claims something is *literally impossible* then pointing out it's possible is a rebuttal.

> Elliot Temple says that Popper’s enemy is, in effect, dogmatism. But dogmatism vs. skepticism is a false alternative.

Right, but Popper isn't a skeptic. Popper says at every turn that we *do* know things. He says some traditional ideas about *how* we know (e.g. induction) are mistaken, and offers a new solution to how we do know.

> The impossibility of knowing a universal, “All S is P,” is Popper’s core idea.

That isn't Popper's idea.

Popper's core idea is that knowledge is created by conjectures and refutations (which is evolution). This includes knowledge of universals.

> “Falsificationism”

Why is this word in quotes? What Popper wrote about "falsificationism" is that he doesn't call his philosophy by that name. (_Realism and the Aim of Science_, Introduction sec. IV)

Calling Popper's philosophy "falsificationism" is like calling Objectivism "Randism" (which Rand didn't call it and didn't want it called).

The word "falsificationism" does not appear even once in _The Logic of Scientific Discovery_, _Conjectures and Refutations_ or _Objective Knowledge_.

And now quoting David Odden:

> One of [Popper's] core claims is that you cannot prove an open-ended universal statement

Popper considers the word "prove" to be infallibilist. So he means you cannot *infallibly prove* such a thing. Which is right. What Popper does *not* say, or mean, is that one can't *know* such a thing.

> because that would require inspecting an infinite set of entities to see if the statement is true

Popper is also correct about the basic point that you cannot inspect an infinite set of entities. So you have to do something else. He talks about what else to do (he doesn't give up and think nothing works).

Odden proceeds to bring up the Duhem-Quine problem. But he misstates it. "you can’t tell which it is". No. There is a problem of how to tell which it is (also you can't *infallibly* tell which it is). There's no mechanical, rote way to figure out which it is. It's non-trivial. Popper offers a solution to that problem (to, as usual, the standard of knowledge, but not to the standard of infallibility).

I think part of the confusion with Popper is he sometimes goes over basic stuff (e.g. fallibility is technically true) and then people try to read him as something far more than he is in those passages. Some things Popper says are really simple and that's all there is to them, and it'd be nice if people would just agree and move on to the more interesting parts. (Should he have written some very simple things? I think so because infallibilism is common and also people often try to debate Popper's simple points.)


curi at 11:43 AM on October 28, 2016 | #7108 | reply | quote

Aren't you going to get in trouble for quoting from a private forum, pay only access here?


Anonymous at 12:26 PM on October 28, 2016 | #7109 | reply | quote

Oists seem really intent on making really clearly infallibilist statements, but also saying they are fallibilists

“Maybe You’re Wrong” by Leonard Peikoff, The Objectivist Forum, April 1981:

How do you know that you’re not in bed dreaming right now?

> Men are capable of error, Descartes noted as he started to philosophize; they can misinterpret sensory data, commit logical fallacies, suffer from insane delusions, etc. How then, he asked, can I be certain of anything, even of so seemingly obvious a truth as “two plus three equals five”?

This brings up the same basic issue we’ve been discussing.

If “certain” means “infallible” – as Popper would read it – then you can’t be “certain of anything” (infallible about anything). I consider this the more standard use of English, but I’m not too concerned about terminology.

And if you told Popper that you defined “certain” to mean “meets the standard of knowledge”, then he would reply that you can be “certain” – that is, you can have knowledge.

The best refutation of solipsism is by the Popperian David Deutsch in his book The Fabric of Reality.

> Human fallibility, on this argument, is incompatible with the possibility of knowledge. “I know” implies: “I can’t be wrong.” Yet “I am fallible,” it is claimed, implies: “I can always be wrong.” Therefore, a fallible being cannot claim knowledge.

Here Peikoff is speaking for Descartes. I’ll argue with it anyway:

“I know” does not imply “I can’t be wrong” for any normal English meaning of “can’t”. To say you can’t be wrong would require omniscience, which isn’t the standard of knowledge.

The solution is rejecting the infallibilist conception of knowledge. Like ITOE says:

> AR: … When you simply boil water, you do not know that it has molecules, nor what happens to those molecules. When you arrive at that later stage of knowledge, you’ve discovered something about water and the conditions of its boiling which you didn’t know before. And, therefore, within your present context, this is a sufficient explanation, even though it’s not the exclusive and final explanation. To reach that you would have to have omniscience.

“I can’t be wrong” is a pretty straightforward way of saying one has the “exclusive and final explanation”. But that would require omniscience. Forget about it. One doesn’t need that. There’s no need to say “I can’t be wrong”. Just say that right now there are no non-arbitrary doubts or criticisms of my position, so I will believe it and act on it. That’s the proper standard of knowledge.

Peikoff goes on to say:

> Fallibility does not make knowledge impossible.

Right. Popper says the same thing.

The part where the article disagrees with Popper is where it summarizes the methods of science. Popper says that particular approach to science doesn’t work and actually science works a different way. People have then focused heavily on the first half of the preceding sentence and gotten the wrong idea. (Popper said my conception of science doesn’t work! He must be a skeptic!)

> For instance: if an ignorant child, asked the sum of two plus three, utters a blind guess, his guess can-and likely will-be wrong; this is an expression of human fallibility. But if the child goes on to study the laws of arithmetic, applies them to the present case, and grasps that any answer but “five” would involve a contradiction, that child, in this particular context, can no longer be wrong; by using a validating process, he has erased the earlier possibility of error.

“can no longer be wrong” reads to me like an expression of infallibility, not contextual knowledge.

Why does Peikoff say something that people will read as error being impossible? Why not say that this conclusion is knowledge, arbitrary doubts about it are not a concern, lack of omniscience is not a concern, and as far as non-arbitrary doubts: no one knows of any or has any promising leads or any reason to pursue the matter (but it’s not impossible that one day someone will learn something new and some old mistake will be exposed and reconsidered).

This is one of the things I find problematic about Objectivism. On the one hand there are plenty of fallibilist statements that explain the matter in a good way and which I believe I understand and agree with. But then there are also statements, like this “can no longer be wrong” one, which strike me as contradicting Objectivism’s fallibility.

Oftentimes I can read some qualifier, like “contextually”, as implied. When I see an Objectivist write, “X is certain” I’ll generally read him as meaning “X is contextually certain”. That’s fine. It’s not necessary to write the long version all the time. We always omit many things when we write and try to focus on what’s important to our point. The point the person is talking about is the value of X and our knowledge of X, not about fallibility and contextuality and epistemology, and that’s fine.

But here I don’t know how to read Peikoff as meaning something true when he says “can no longer be wrong”. The whole point of making the statement seems to be to express infallibility. And he is discussing epistemology, not something else.

What is the value and purpose of this “can no longer be wrong” claim?

People do study math, think they grasp it, and then get it wrong. Happens. No big deal. Why try to make infallibilist-sounding statements about the matter? Why say “erased the earlier possibility of error” as if fallibility has disappeared? Fallibility is there – it’s just not grounds to doubt or criticize.

I get what I believe is the general concept: if you’ve done nothing to seek the truth, then whatever stuff you come up with is arbitrary. (Your claims are vulnerable to criticism for poor methods.) But if you have used good methods of figuring stuff out, then a better counter-argument would be needed for doubt. That’s basically the same position Popper has.

If you want the word “certain” to refer to fallible knowledge, fine, no big deal. I do think you’ll confuse a lot of people, though I also see what problem you’re trying to solve by doing it. But it doesn’t make sense to me to talk about erasing the possibility of error when discussing fallible knowledge. Why not say it erases the previous doubts (but not all logically possible doubts – which aren’t a concern or problem but still logically exist)? Then I’d agree. All I can come up with is it’s a bad use of language and somehow it doesn’t actually contradict fallibility or Popper, or it’s wrong.

But anyway besides some points like this, yes I agree with what I believe is the general point of the article – and so would Popper.


curi at 12:47 PM on October 28, 2016 | #7110 | reply | quote

> Aren't you going to get in trouble for quoting from a private forum, pay only access here?

no. as always, i'm allowed to quote myself, and fair use (little snippets) of others.

i own what i write.


curi at 12:52 PM on October 28, 2016 | #7111 | reply | quote

Yes, you own what you write. But the little snippets of others you do not own and are not public. Why do you think it counts as fair use?


Anonymous at 12:58 PM on October 28, 2016 | #7112 | reply | quote

cuz i've read what fair use is. i also read HBL's policies. you aren't giving any kind of argument or demonstrating any knowledge of the matter. lame.


Anonymous at 1:04 PM on October 28, 2016 | #7113 | reply | quote

psychiatry vs liberty

> The legal question is entirely separate.

Creating pseudo-medical (and thereby pseudo-scientific) excuses for making legal exceptions is one of the major purposes of psychiatry. This plays a large role in our society.

Many behaviors, lifestyles, ideas, etc, are unwanted or disapproved of, but are not illegal. Many times people want to control others, but it's difficult to come up with a good justification. What are illiberal people to do about that? Psychiatry is one of the major tools of illiberalism in our culture which helps them control and force others.

> Whether one calls it “mental illness” or “psychosis” or just “madness” doesn’t change the fact that people in this condition, which is a horrible tragedy, do not have the same rights as the sane do.

No one should be deprived of rights outside of the criminal justice system. This is a key point about the rule of law.

> A sane person cannot be “forced for his own good”; an insane person can be.

Previously you brought up using force against objective threats. Defense is great – as long as it's done according to the rule of law, not outside the rule of law.

But here you talk about using force supposedly for someone's own good. You're talking about using force for a reason *other than* because someone poses a threat. So I take you to be advocating the initiation of force in cases where you claim it helps the person being forced. I don't see a reasonable alternative reading. I do not consider that position compatible with Objectivism.

In a liberal society, what do you do when someone has problems? You can offer voluntary, consensual help (including persuasive arguments or paid services). Or you can leave them alone. But you may not start controlling their life, by force, supposedly for their own good.

*Psychiatry is an attack on liberalism*. ("Therapy" and psychology have milder problems.)

*The purpose of psychiatry and its rhetoric is to justify deviations from liberalism.* Psychiatry excuses the use of force outside the rule of law. That's the essence of what it's about, both historically and today. Its positions are not derived from philosophical or scientific knowledge of the mind – those are excuses which are tacked on to predetermined conclusions.

> But one can’t argue from the existence of abuses to some kind of conclusion that no one should be institutionalized against his will. Isn’t that what Szasz holds?

Szasz says no one should be imprisoned outside of the rule of law, whether you call the building a "prison" or a "hospital".

If you don't commit a crime, and you aren't a threat, then you shouldn't be institutionalized against your will.

> Whether we call it “mental illness” or “neurosis” or “psychological problem” is secondary.

Making this a medical-scientific issue plays a large part in the prestige, authority and acceptance of psychiatry which helps it get special powers outside the rule of law. So it's a big deal.

I do agree it's secondary *in philosophy*.

> The primary issue is recognizing or not recognizing the existence in many people of automatized contents and methods that are anti-life. ...

> One view, which may or may not be that of Szasz, is that there is no objective standard of proper and improper automatized content and methods–who are we, in effect, to judge the rightness or wrongness of ideas and thinking methods. That’s subjectivism.

That's not Szasz's view. He would say something like: Bad thinking (automatized or not) causes many big problems in people's lives. And they can seek help on a contractual basis. None of this is any reason to deprive someone of liberty.

> What difference would it make if instead of calling it mental “illness” we called it “neurosis”? Would that make Szasz happy? I doubt it.

If people stopped believing "mental illness" was an illness, or any sort of medical issue, that would be a major improvement. It would undermine the supposed scientific justifications of psychiatry and make it easier to see more stuff as *moral conflict*, disagreements, automatized bad thinking, memes, etc. This reconceptualization of the issue would be a positive and significant step.

> A man who has psychological problems is a conscious being; his cognitive faculty is hampered, burdened, slowed down, but not destroyed. A neurotic is not a psychotic. Only a psychotic is presumed to suffer from a total break with reality and to have no control over his actions or the operations of his consciousness (and even this is not always true). A neurotic retains the ability to perceive reality, and to control his consciousness and his actions (this control is merely more difficult for him than for a healthy person). So long as he is not psychotic, this is the control that a man cannot lose and must not abdicate.

> Would Szasz agree that psychosis is real?

People do things which are called "psychotic". Psychiatry's claims about what's going on there, in the mind, are largely false – and largely different than Objectivism's take on automatized bad ideas.

What is a "total break with reality"? How do you know when someone has one? The flaws become more apparent if you get into details.

But what Szasz and I care most about is liberalism: are you proposing the initiation of force? If so, exactly when and by who against who? That's a really big deal. I'll be happy to discuss subtle philosophical issues about the mind *after* we agree on liberty. The case for liberty does not depend on those details.

If you aren't trying to justify force then it's just a matter of people offering services – such as advice and drugs – on a voluntary-contractual basis to people who choose to buy those services. Szasz and I would still think some services are bad to pursue, just like I'd recommend against hiring a fortune teller, but that's less of a big deal. First let's be clear about the liberty issue.

> The one area I agree with is that in times past horrible things were done to asylum patients–lobotomies and electroconvulsive shock.

You're mistaken because our culture is intentionally misleading about this. Lots of people have that impression. Actually ECT, lobotomies and some other horrors are done in the present.

https://healthimpactnews.com/2013/lobotomy-returns-under-a-kinder-gentler-new-name/

https://www.washingtonpost.com/national/health-science/fda-electroshock-has-risks-but-is-useful-to-combat-severe-depression/2016/07/18/4a109cbc-2f4e-11e6-9de3-6e6e7a14000c_story.html

http://www.mayoclinic.org/tests-procedures/electroconvulsive-therapy/basics/definition/prc-20014161

> As to drugs, some are wonderful. From the little I know, the lithium family of drugs, at least, brought sanity or near-sanity to many, many sufferers.

How do you know? Some particular study you think is correct? Anecdote? Lots of anecdotes spreading in the culture to the point it's something everyone thinks they know? An explanation of the causal mechanisms involved? Some authorities saying so? (I have the same questions regarding Ritalin.)

In my experience reading studies of this type, they either have glaring errors or the conclusions fall far short of the typical beliefs our culture has about how wonderful and effective these drugs are.


curi at 2:07 PM on October 28, 2016 | #7115 | reply | quote

Is the problem on this issue that human problems are called "mental illness" or that the government is involved?

Many people with "mental illness" want to believe they have a "mental illness". Don't they have the right to? They will want to go to doctors. The same way the people who go to fortune tellers believe fortunes can be read by fortune tellers and etc.


Anonymous at 3:13 PM on October 28, 2016 | #7125 | reply | quote

talking to alan about wording

One-line summary: Don’t hedge with maybes.

> It’s common for parents and other adults to inflict escalating punishments on children until they comply. The psychiatrist could be viewed as just another adult inflicting just another set of punishments. If the child claims to like the Ritalin he might get off easier than if he claims not to like it.

This is poor wording because of the could, if, and might. It’s defensive writing that limits its claims to limit the possibility of criticism.

It’d be better to write the point like this:

> It’s common for parents and other adults to inflict escalating punishments on children until they comply. A psychiatrist is another adult inflicting a set of punishments. When the child claims to like the Ritalin, he tends to get punished less than if he claims not to like it.

This changes a “could” to an “is”, which is important. You are trying to make a claim about reality (what is), not about what narratives someone could imagine.

It changes “if” to “when”. This is more stylistic. “If X then Y” is asserting something and you can compare the content to “When X, Y”. But it sounds more like you’re saying something is the case, instead of hedging with qualifiers, when you say it with “when”. And, in our culture, people are bad at reading and understanding ifs.

I changed “might” to “tends”. You wanted “might” because there are exceptional cases. But “might” is too vague. Anything “might” happen. “Tends” makes it clearer you’re trying to talk about a causal relationship: the child saying he likes Ritalin actually does something to reduce punishments. (It’s never 100% effective, and there’s exceptions where it’s counter-productive, but that’s OK.)

“Tends” is still a pretty vague word (it doesn’t specify when/why it does or doesn’t work) but it’s less vague than “might”.

I also cleaned up the wording a bit for clarity and brevity.


curi at 3:22 PM on October 28, 2016 | #7126 | reply | quote

> Is the problem on this issue that human problems are called "mental illness" or that the government is involved?

both

> Many people with "mental illness" want to believe they have a "mental illness". Don't they have the right to?

sure. people have the right to believe they are Jesus, too. so what?

> They will want to go to doctors. The same way the people who go to fortune tellers believe fortunes can be read by fortune tellers and etc.

so what?


Anonymous at 3:24 PM on October 28, 2016 | #7127 | reply | quote

> so what?

People will want the idea of mental illness to be taken seriously by society. Not like the right to believe you are Jesus, which is not taken seriously.

If "mental illness" is a fraud the government should recognize it as a fraud. This would make it impossible for people who believe they are "mentally ill" to get medical help.


Anonymous at 4:01 PM on October 28, 2016 | #7136 | reply | quote

@#7136 you're incoherent. i think you're assuming a ton of unstated premises.


Anonymous at 4:02 PM on October 28, 2016 | #7138 | reply | quote

HB made an interesting point in post 13773. No reply from Elliot or Alan to that. Why?


Anonymous at 4:11 PM on October 28, 2016 | #7140 | reply | quote

@#7140

FYI, next time state the thread or i won't go find it. i don't know any way to easily get to HBL posts by post number.

you say there's an interesting point but you don't say which point you think is interesting or why.

next time you post, please write some content instead of no content.


curi at 4:13 PM on October 28, 2016 | #7141 | reply | quote

Error is physically and logically possible.

Terminology:

Possible: Does not violate the laws of physics or logic.

Certain: Impossible to be mistaken. (A mistake would violate the laws of physics or logic.)

Couldn’t: The phrase “X couldn’t be mistaken” means “It’s impossible for X to be mistaken.”

Let’s look at the example where you study math and then do several math problems. They can be something simple, e.g. addition problems. You cannot be “certain” of your solutions because you may have made a calculation error. Even if you double check. Even if you quadruple check, no law of physics or logic would be violated by an error. People do make addition errors, and there is no physical process which literally 100% reliably does addition with errors being impossible.

You can try to get around this in a discussion by adding premises. You can say things like, “If there was no calculation error, then my solutions couldn’t be mistaken.” But that is not a real life situation. There’s no 100% reliable way to establish you didn’t make a calculation error that you can use in reality.

This is basic and not a big deal, except that people are resistant to it and argue with it because they jump to conclusions about what it means and they don’t like those conclusions (often with good reason).

Contextual certainty does not address this issue. You may try to say, “Within the limits of current knowledge of math, I am certain I calculated these problems correctly. In a future context it’s conceivable we’ll know something new about math, but I can’t worry about that.”

I agree with the attitude of not worrying about the non-specific possibility of learning something new in the future. “We might learn something contrary in the future” is no reason to doubt anything.

However, it remains possible (does not violate any law of physics or logic) that you could have made a calculation error that is an error according to your knowledge today (an error according to your current context).

That’s OK. Infallibilism isn’t needed anyway.

The (physical and logical) possibility of error is no reason to hesitate, doubt, or be a skeptic. It comes up sometimes in some epistemological arguments. It’s not a huge, enlightening point, but nor should it be denied since it’s true. It’s important to get the basics right and work from there without jumping to conclusions about their implications.

With this said, Peikoff wrote:

> For instance: if an ignorant child, asked the sum of two plus three, utters a blind guess, his guess can-and likely will-be wrong; this is an expression of human fallibility. But if the child goes on to study the laws of arithmetic, applies them to the present case, and grasps that any answer but “five” would involve a contradiction, that child, in this particular context, can no longer be wrong; by using a validating process, he has erased the earlier possibility of error.

I cannot agree with the “can no longer be wrong” and “erased the earlier possibility of error”. I don’t see how to read it other than infallibilism which contradicts the basic facts of reality I’ve outlined in my post.

I’d like to move on to talking about how knowledge is created, but I don’t know how to do that while basic premises are denied or misunderstood (because, it seems to me, people object to false conclusions that are not actually where I was going with the premises).


curi at 12:26 PM on October 29, 2016 | #7203 | reply | quote

Evolutionary epistemology.

The important core of Popperian epistemology is not about psychology or motivations. It’s about logic and processes for dealing with ideas and their results. I wouldn’t suggest worrying about the rest until later (and Popper is a mixed thinker. The epistemology core is his achievement, not any comments he may have also made about motivation.)

There is a fundamental problem in epistemology: how can knowledge be gotten from non-knowledge?

There is only one answer that has ever been thought of, by anyone, which makes any sense. That answer is the theory of evolution. Knowledge can be created by replication with variation and selection.

This was originally used to understand the development of genetic knowledge in animals (e.g. eyes have knowledge of optics).

This requires understanding the term “knowledge”. Knowledge is not “justified, true belief”. Knowledge is solutions to problems. This is distinct from “information”. Information is any kind of data, as useless as it may be. Information is basically anything that can be measured. Information is a technical term in physics and computer science which is often used in imprecise ways by lay people.

If you point a telescope at the sky and record everything it sees, that’s recording information. But you haven’t learned anything. You may study the resulting data and gain some knowledge, but the raw data is not knowledge.

Eyes are adapted to a purpose. That’s knowledge. It’s one way instead of another in order to address a problem (the problem of sensor design in order to address underlying problems like detecting food and predators, which help address the underlying problem of (approximately) having lots of great grandchildren, which is, for animals, the underlying problem of replication).

Knowledge is hard to change because if you make random changes then it stops solving the problem it was solving before. E.g. most changes to eyes would make them stop seeing. There’s a lot more ways to be wrong than right. So if you’re right (if you have the solution to a problem), most changes ruin it. But if there’s no knowledge in the first place, there’s nothing to ruin. If the stars were in slightly different positions in the sky it wouldn’t matter. The stars were not placed where they are to solve some problem. Their location is just a matter of some initial conditions and then following the laws of physics. Constellations are coincidence, not design. Knowledge is design.

There are many ways to understand knowledge because it’s an important, fundamental concept that has come up in many different fields.

Eyes aren’t perfect. Nothing’s perfect. They still work (usually). There’s no final perfect contextless truth and then done. The growth of knowledge is an ongoing process.

Evolution creates knowledge because it eliminates error. Error detection and correction is the fundamental way new knowledge is created. This contrasts with most approaches to epistemology, which emphasize how ideas are created. Evolutionary epistemology allows for brainstorming dumb ideas and then fixing the errors in a second step.

The large majority of genetic mutations make animals worse and are eliminated by selection.

Most ideas people come up with are bad ideas, and are eliminated by criticism while still subconscious.

The quality of ideas that get to the conscious level is much higher because a huge amount of criticism is used to filter ideas out first. The original underlying idea-creation process is not a process of creating good ideas, it’s merely a process of creating ideas. Just like gene mutation is a process of creating new genes, not a process of creating good new genes. This is OK. Random variation plus error correction is adequate to create knowledge. (There are some details, like the variation needs to be pretty small so it isn’t constantly breaking everything and good ideas or traits can be preserved. And there are details about what a “random” variation is which is actually quite tricky.)
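
Here’s a minimal Python sketch of the bare logic, just an illustration I’m making up for this post (the function names, the numeric example problem, and the step sizes are arbitrary choices of mine, not anything from Popper or Deutsch): a candidate is replicated with a small random variation, and the variant replaces it only if it survives criticism better.

```python
import random

# Toy illustration of knowledge creation by replication, variation, and selection.
# "criticize" plays the role of error elimination: it reports how badly a candidate fails.
def evolve(initial_guess, criticize, steps=1000, step_size=0.1):
    current = initial_guess
    for _ in range(steps):
        variant = current + random.uniform(-step_size, step_size)  # small random variation
        if criticize(variant) < criticize(current):  # selection: keep what survives criticism better
            current = variant
    return current

# Example problem: find a number whose square is 2. The answer gets good without
# ever being proven final or error-free.
print(evolve(1.0, criticize=lambda x: abs(x * x - 2)))  # roughly 1.414...
```

Note that the variation step is dumb; all the improvement comes from the error elimination.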

This is all a matter of, basically, logic and physics (evolution would not work under some hypothetical different laws of physics). My reason for believing this is how human thinking works is not study of human psychology or the history of science or anything like that. It’s simply that there are no alternatives, not even close.

What alternatives have been proposed for how knowledge is created?

Creationism. Basically, knowledge is created by magic. It just kinda spontaneously appears.

Designer. Misses the point. It sneaks in the supposedly-created knowledge as an attribute of the designer in the premises.

Deduction. Too limited and also too high level. Deduction is performed by thinking people who have some underlying stuff going on that lets them think of what the next step is and check if it follows the rules of deduction or not.

Induction. Has logical problems if taken as a solution to how knowledge is created at low level. Doesn’t actually offer any clear statement of how knowledge can be created from non-knowledge. If you presuppose a thinking person (so it’s not solving the problem of how knowledge can be created) and take induction as more of an inexact high level description of human thought, it’s also wrong for a variety of reasons including its ignorance of the role of error correction.

Abduction. Vague nonsense.

And that’s about it. In all of human history there haven’t been many ideas about how knowledge can be created, and only one, evolution, works.

I assume someone here will wish to debate induction. That’s fine. Please begin by specifying how induction provides a solution to creating new knowledge without presupposing a thinking person using judgement. Also please specify mechanical, rote rules for judging if evidence or argument X supports idea Y (and how much). The procedure must not require any human judgement calls (or else you’re presupposing thinking again and not actually solving the fundamental problem of epistemology).

Don’t jump to conclusions about the implications of these claims. Once the implications are carefully worked out, what you end up with is mostly Objectivism. There are reasons for that in the way Objectivism was designed with a focus on concepts and principles. Most stuff that builds on induction in Objectivism does not build on the specific details of induction. Rather, it builds on the important concepts and conclusions that induction and some other core philosophy is thought to provide – there is a real world, we can know it, etc. Atlas Shrugged doesn’t constantly base things on specific claims about repeated observations or the future resembling the past or Solomonoff Induction or Bayes’s theorem or whatever approach to induction you favor. The approach is instead that lots of those details were packed up into concepts, like realism, confidence in one’s mind, etc, and then further steps were built on those integrations. (Ayn Rand didn’t even study the details of induction all that much in my understanding.) And so to replace induction with evolutionary epistemology, all you have to do is work up from it to some of the same integrations, concepts and principles Objectivism uses, like that humans really can know stuff (which were thought to come from induction), and then you’re done and all the rest of Objectivism stands just fine.

Much of this is presented better in David Deutsch’s books (which are better than Popper’s books). If you want a more polished and organized version (and you should), read them. I’m trying to make some comments directed at concerns HBL people have (the books don’t speak to the Objectivist perspective in particular and are considerably longer).


curi at 12:26 PM on October 29, 2016 | #7204 | reply | quote

More on Popper.

> Where is Popper’s influence, then? He and his ideas are widely but shallowly cited in basic scientific education, where I think he is responsible for the extant undercurrent of “you can’t ever know for sure” thinking.

The important thing about Popper isn’t his influence, it’s the ideas. E.g. looking at epistemology in terms of error correction is a huge insight.

When there’s a notable thinker with some notable ideas, you routinely see the people who study his work aren’t as good.

Most (perhaps all) of Ayn Rand’s students, readers, fans, etc, aren’t as good as Ayn Rand. They aren’t as good at thinking and they don’t understand Objectivism as well as her either. When I talk with Objectivists I typically find they make way less sense than Rand’s books, seem to misunderstand some parts of Objectivism, disagree with some parts of Objectivism due to bad arguments (whether they openly say they disagree or not), etc.

Almost all (perhaps all) the people influenced by Popper are not as good as Popper. They don’t understand his epistemology as well as he did. I’ve read and discussed with a lot of Popperians and they usually just don’t really get Popper and are much, much worse, which I think is unsurprising. (I believe, in this case, that David Deutsch and I have actually improved on Popperian epistemology.)

Science is actually really amazing about this. There are thousands of physicists alive today who understand relativity better than Einstein did. And understand physics overall better than Einstein did, too. (Maybe not every aspect of it, but some major stuff about it.) The situation is even clearer with Newton. Not only have the major breakthroughs in physics been improved on, people have actually learned the original ideas and the improvements and actually are ahead of the old greats.

In science you generally read modern books instead of the originals. And the modern books are better. In philosophy you often read the important thinkers directly because there aren’t a bunch of clearly improved versions of the ideas by later thinkers. We’re soooo far away from the point where you could say, “Don’t bother reading Ayn Rand. Read more recent books which know more and contain all the good ideas from Rand but explained even better.” But you can say something like that about Newton or Einstein — reading them is optional already.

Most fields are not as successful at communication and progress as physics. Philosophy is effectively a much younger field than physics. There haven’t been very many philosophers who contributed anything of value. There hasn’t been as much of a progression as in physics. A big reason why is that it’s much harder in philosophy than in physics to judge whether a new idea is an improvement, so attempts to refine and improve on philosophers often make things worse instead of better. In physics, the method of experimental testing, as well as the mathematical precision, helps prevent so many new ideas from going backwards like they do in philosophy.

> Mathematicians know better than to disavow all of their proofs as “fallible”, and in general, a properly-executed symbolic proof cannot “fail”

You made “properly-executed” (that is, no mistakes in this context) a premise. Yes, when you assume there are no mistakes (in context) as a premise then fallibility goes away (in context). But in reality there is no infallible way to know you haven’t made any mistakes (in context). So fallibility never goes away. This is harmless, and is important to get right. It may seem pretty trivial but getting the basic premises precisely right does help for dealing with more advanced epistemology.

> When he says things like (LSD p 33) “I never assume that we can argue from the truth of singular statements to the truth of theories. I never assume that by force of ‘verified’ conclusions, theories can be established as ‘true’, or even as ‘probable’,” that leaves me wondering how in the world we ever could establish a theory as true (or ‘true’), and if we cannot, what are we trying to accomplish?

We don’t establish theories as true in the sense of not containing any mistakes, or in other words being infallible (in a specific context or in general).

What we do is learn things and correct errors to improve our knowledge.

When it comes to learning in general, there’s no end points, no stopping points, it’s an ongoing perpetual process. You’re never done learning.

But you can be done learning a particular thing for now. Popper would say you’re “tentatively” done because it’s possible you’ll want to reopen the issue in the future. Objectivists would say you’re “contextually” done — it’s good enough in your current context but in a new context then it could get revisited.

When are you (tentatively) done? Popper’s answer is something like: when you don’t see any further problems (or perhaps only problems that are small enough you’d prefer to move on to work on some other bigger problems regarding something else). I consider this correct but a bit broad — people want more specific, clear, actionable advice.

Put another way, the standard for stopping and accepting an idea (as knowledge) is not that it be perfect or infallible or established as true. The standard is that you think the idea is good enough to accomplish what it’s supposed to and, as a practical matter, you think your time would be better spent elsewhere than on trying to make further improvements to this idea. (You can revisit it if you later change your mind about that. You could run out of other things to do, find it’s not working as well as you expected, have a new idea which has an implication for it, etc.)

The ideas we act on generally aren’t out-of-context truths (an omniscient being would be able to point out some error), and often contain some limited mistakes in context too, but work well enough for life anyway.

People find this too squishy and not proofy enough. It doesn’t satisfy their desire to do stuff like prove and establish as true and verify. It merely lets one live on earth and make limitless progress in one’s knowledge. Addressing the attitude of craving certainty (which comes in large part from a lack of trust in human judgement and wanting something better than human judgement to rely on) is one of the reasons Popper emphasizes fallibilism, and says a lot of other things. Popper, like Rand, doesn’t think human judgement is so weak, bad, unreliable, pathetic, not good enough! We don’t need something fundamentally better than it. We do need to organize our thinking process, though, and use good methods — hence the evolutionary process of brainstorming and criticism (which has a lot more fine details we can get into; please don’t misunderstand my summary comments as being all there is).

> He may have had in mind ideas that don’t match what he said or how people have interpreted him, but I have no way of evaluating such conjectures.

If you read all of Popper’s books, and you don’t misunderstand him, then you can learn the things I’m talking about like about evolutionary epistemology. This is a big deal, even if it’s hard, because it’s original and not available anywhere else. (Except from David Deutsch, who got it from Popper. And from me, who got it from Deutsch and Popper.)

Popper’s books also contain some other valuable work including better criticisms of induction and justification than what came before, and some good work on understanding the history of science.


curi at 1:40 PM on October 29, 2016 | #7205 | reply | quote

Misunderstandings.

Chuck Butler, your post focuses on the same fundamental misunderstanding we’ve been discussing. You take Popper’s expressions of the fallibility of knowledge to be skepticism.

The statement, “no number of true test statements would justify the claim that an explanatory universal theory is true” means that they would not justify it as infallibly true (or probably infallibly true). That is, there is no way to ever justify that your idea is what an infallible, omniscient being would agree with.

That’s it. This is just misunderstanding Popper’s meaning (because he uses “true” to refer to ideas with no errors and no possibility of further improvement, which I consider reasonable), not skepticism.

(This is complicated somewhat by whether you’re speaking contextually or not. I went over that issue above.)


curi at 1:47 PM on October 29, 2016 | #7206 | reply | quote

Popperians aren’t skeptics.

I’ve actually had many, many discussions with Popperians. People make all kinds of mistakes (not surprising) but skepticism isn’t one of them. They don’t attack knowledge as such. It doesn’t come up. They don’t spend their time casting non-specific vague doubts on everything. They make somewhat more positive knowledge claims than others (because they’re more of the intellectual type who has read books and thinks, wisely or unwisely, that they know some stuff).

Discussions with Popperians do not consist of people going, “But how can you know anything?” or “What if you just dreamed it?” or “But you can’t prove that.” or “No one really knows anything, all we have are guesses which we choose between by subjective preference, so I choose not to accept your claim.” It’s not like that in practice.

The reason is that Popperians, while thinking they can’t infallibly prove stuff, also don’t think infallible proof is the standard of knowledge. They don’t demand it because they aren’t looking for it and don’t think it’s a problem not to have it.

Sure they reject “certainty” (which they read as infallibility), but they don’t demand “certainty” either or criticize ideas for being “uncertain” (because they’d consider that a demand for infallibility, which they’d consider a mistake to demand).


curi at 2:02 PM on October 29, 2016 | #7207 | reply | quote

> There is a fundamental problem in epistemology: how can knowledge be gotten from non-knowledge?

> There is only one answer that has ever been thought of, by anyone, which makes any sense. That answer is the theory of evolution. Knowledge can be created by replication with variation and selection.

Replication of what, if you started with "non-knowledge"?


Anonymous at 2:20 PM on October 29, 2016 | #7211 | reply | quote

> Replication of what, if you started with "non-knowledge"?

for example, crystals.

more generally, some information.


Anonymous at 2:24 PM on October 29, 2016 | #7215 | reply | quote

it's really getting new knowledge you don't already have that's interesting, even if you do start with some knowledge. i shoulda said that clearer.


Anonymous at 2:25 PM on October 29, 2016 | #7216 | reply | quote

No high bar for knowledge claims.

> I think Popper overestimates the danger of upgrading it to “certain” (and without an “evidence for” perspective on knowledge, it’s unclear to me what the equivalent translation would be within Popper),

If by “certain” you mean “meeting the standard of knowledge” then I don’t think so at all. Popper calls tons of stuff knowledge. He doesn’t consider it dangerous to claim you have knowledge when you recognize your fallibility and will reconsider if a problem comes up (e.g. you get new information that potentially contradicts what you thought you knew).

I’m also fine with calling stuff “true” if I think fallibility is implicit in the context. E.g. I’ll sometimes say, “I think X is true” which I think is a reasonable way to phrase a knowledge claim.

I don’t hesitate to say, “I think X”, by which I mean: “I think X is knowledge. X is true as far as I know, and I’m prepared to act on and live by my judgement of X.” This is routine.

How can this be accomplished without “evidence for” or other positive “support” or “justification”?

By brainstorming and criticism, in relation to a particular problem, until you have exactly one non-refuted idea how to solve that problem. For problems regarding human life where human decision making and action is required in a timely manner, this is always achievable in a timely manner. (Popper gave some clues about this but didn’t fully understand it. He generally said more like you look at the evidence and the criticism and then you make a (tentative) judgement call about which idea is best, which he calls a “critical preference”.)

If what I’ve said so far makes sense to you, we can go into the method for getting to one idea, which makes support unnecessary (the point of support/justification is for choosing between ideas — you choose the most supported/justified one.)

> but in my experience it seems that many people do use certainty as an excuse for lazy thinking. It’s a complaint that needs to be addressed.

Some people want to be done thinking. They want safety. They want guarantees. They want no possibility of ever having to worry about thinking about an issue again. Popper says we never have that. So does Objectivism. Addressing that error is important in general, but I think it’s a side issue here.


curi at 2:37 PM on October 29, 2016 | #7218 | reply | quote

The contradiction isn’t clear to me, I think it’s coming from infallibilist premises.

> These are in contradiction, since the second contradicts the fact, acknowledged in the first, that the laws of physics are known with certainty.

This reads to me as claiming infallible knowledge of the laws of physics, which I disagree with. My understanding of the laws of physics is fallible.

Trying to read it another way, let’s consider if it could mean that my understanding of the laws of physics meets the standard of knowledge (not the standard of infallible omniscience). I would agree with that, but then I wouldn’t see any contradiction anymore. The contradiction seems to be based on contrasting certainty against fallibility and saying they contradict. My best understanding of what’s going on is that Reed is an infallibilist (but a mixed one, not a consistent one).

BTW I think it’d be clearer if Reed defined what he meant by “certain”. Is he agreeing with and using the intentionally infallibilist definition I gave? I don’t find it clear.

Did Reed think I was advocating the possibility of certainty as I defined it? I was not! There are no physical processes which offer a 100% guarantee against error.


curi at 2:44 PM on October 29, 2016 | #7219 | reply | quote

Measurements, like everything, are fallible.

You can misread the results of the measuring device.

In reality, there are no measuring devices that 100% guarantee results within some error bars. Measuring tools can malfunction and give readings outside the expected precision for a variety of reasons.

You can misunderstand the nature of what you’re measuring and therefore measure it wrong. Measurement relies on some conceptions of how your measuring tool works as well as what you’re measuring.

You can misuse measuring devices. For example, someone wanting to measure a pile of sand might put it in a bag and weigh it, but forget to weigh the bag alone and subtract it. Many measuring devices have some complexity to how to use them, and there is no law of physics or logic which rules out making mistakes when using even very simple measuring tools.

None of this diminishes the efficacy and value of measurements. It’s a technical-logical point relating primarily to the laws of physics and logic. Working out the consequences for more abstract fields is a many-step process, so please don’t jump to skeptical conclusions. And please note I am not questioning or doubting any particular measurement (nor all measurements), which I only do when I have some specific reason to doubt in that case. (E.g. a measuring device is old and has been giving me trouble. Then you use it and get a surprising result. So I tell you it may be busted. Or there’s a complex measuring device relying on some theories of physics which are published in a paper. I find a mistake in the paper, then doubt the measured results.)

You ask me to argue for universal fallibility. But what am I to say? Propose an infallible physical process and I’ll tell you what’s wrong with it.

As an aside, we should be careful with category errors. It’s perfectly acceptable to regard a scale as neither fallible nor infallible, because error is a concept that applies to thinking and ideas, whereas a scale is an inanimate object that simply follows the laws of physics. That doesn’t prevent a scale from malfunctioning though. The same thing applies to eyes. (I’m OK with terminology that calls eyes fallible or terminology which says fallibility doesn’t apply to them but does apply to interpretation of sensory data. Either terminology seems reasonable to me.)


curi at 4:44 PM on October 29, 2016 | #7226 | reply | quote

How measuring devices work.

All measuring devices, however precise, have an N% chance to give a measurement within M% of the correct answer, given various background assumptions which are fallible.

N is not 100.

One reason is that physical objects don’t always behave as expected in every day life. They very often do. But there are very low probability events (much less likely than astronomically unlikely) that are physically possible.

Atoms and molecules are always jiggling around (well on Earth they are, let’s not talk about absolute zero temperature right now). If by coincidence a bunch of stuff jiggles in the same direction and randomly avoids collisions, then basically it can move in that direction. The kinetic energy for this motion is present. It’s just extraordinarily unlikely, and normally their various motions basically cancel out (you get about as many motions to the left as right, north as south, and you have collisions, and the end result is that e.g. your iPhone is a solid object that doesn’t fall apart).

It’s like if you choose a random number from -100 to 100 a trillion trillion trillion times and then average them. You are extraordinarily likely to get a result very near 0. But it is possible to get a result of -90. Apparently spontaneous motion of macroscopic objects and some other oddities are like that.

So N may be something like 99.999999999999999999999999999999999999999999999999999999999999999999 but that is not in fact 100.

This wouldn’t be such a big deal if people would stop denying it. You can pick any confidence you’d like (99.99%, 99.9999999%, whatever) and achieve it. Just not literally 100%.

Besides, there are those background assumptions, which are fallible anyway. They include that the measuring device is properly constructed and is in good repair. There is no physical process to guarantee that 100%. And there’s the background assumptions that your understanding of the relevant laws of physics, properties of the thing(s) being measured, etc, are true. If you were mistaken about the physical properties of your measuring device, then your measurement may be invalid. This is not regrettable or painful. It’s really no big deal. The physical possibility of error is not evidence of error. Accepting this is no reason to doubt your measurements. That’s not what this principle is for. It’s for other stuff like understanding epistemology correctly (we’re fallible, rather than infallible, and this has some non-skeptical implications that can be worked out).
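
To put a toy number on the averaging analogy, here’s a small Python simulation I made up (the sample sizes are vastly smaller than a trillion trillion trillion, and the cutoffs are arbitrary choices of mine): averages of many uniform draws from -100 to 100 land very near 0 almost every time, but that’s an overwhelming probability, not a literal 100% guarantee.

```python
import random

def average_of_draws(n):
    # average of n independent draws from the range -100 to 100
    return sum(random.uniform(-100, 100) for _ in range(n)) / n

samples = [average_of_draws(10_000) for _ in range(1_000)]
within_2 = sum(abs(s) < 2 for s in samples) / len(samples)
print(f"fraction of averages within 2 of zero: {within_2:.3f}")   # very close to 1, but not 1
print(f"largest deviation seen: {max(abs(s) for s in samples):.2f}")
# An average near -90 is physically possible but so improbable this simulation will
# essentially never show it; improbable is still not impossible.
```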


curi at 5:12 PM on October 29, 2016 | #7228 | reply | quote

Do Psychiatry Drugs Work?

One-line summary: Does anyone have scientific arguments and evidence that some psychiatry drugs work as claimed?

Does anyone know an academic paper, which they will vouch for being correct, which is adequate for us to have contextual knowledge that some psychiatric drug works correctly? An anti-depressant, anti-psychotic, or get-kids-to-listen-to-their-teachers medication would be good (those are categories I have doubts and criticisms regarding) but a lot of other types would be fine too (I have broad criticisms of these kinds of drugs). I will reply to papers with either thanks and learning something, or a criticism of the paper.

I have looked at a variety of papers, books, essays, etc, but never found one I thought was correct.

Non-academic references are fine too, as long as the quality is good. I would prefer something scientific and written before I asked this question, rather than ad hoc replies debating it.

Here’s some sense of what won’t work: I won’t be impressed by anecdotal evidence or any studies which aren’t double blind and placebo controlled. I also won’t be convinced by a correlation study which finds a correlation and then tells a story about a causation with no evidence or arguments for why it’s that causation rather than one of the other causations that would explain their data.

double blind link: http://www.orthorexia.com/wp-content/uploads/2011/11/Double-blind-studies.pdf


curi at 7:53 PM on October 29, 2016 | #7230 | reply | quote

I wasn’t talking about the finite precision of measuring devices.

You shouldn’t blame disagreements on other people’s ignorance or lack of effort. I’ve read ITOE multiple times, discussed it multiple times with people from a variety of places, and chewed it. I may be mistaken, but let’s stick to discussing the ideas instead of making personal remarks.

I don’t think you understood the point I was trying to make. The finite precision of measurement is the M% error above, e.g. the measurement will be within 0.3% of the true value. There’s also a chance (the N%) that the measurement is not within the expected precision. Due to the laws of physics, there is never a 100% guarantee the measurement result is correct to the precision of the instrument. In rare cases (very very rare with good tools), a measurement can be outside the error bounds. This is nothing to worry about in life in general, however it does contradict claims to literally 100% infallibility.

It’s also possible that, even assuming all your contextual background knowledge is true, you made a mistake applying it in this case. E.g. you could have gotten all the physics concepts for the tool right but made a math calculation error or just misread the output.


curi at 9:10 PM on October 29, 2016 | #7231 | reply | quote

Popper and I meant metaphysically possible, not epistemologically possible.

I think we can now clear up a lot of our disagreements about “possible”.

When Popper or I talk about “possible” and related concepts (like “impossible”, “may”, “can”, “could”) we are talking about what you call metaphysical possibility. We don’t mean epistemological possibility. That’s how our statements should be read to avoid misunderstanding.

I think of it as “physical possibility” (does not violate a law of physics) rather than “metaphysical possibility”. I don’t know why you add the “meta”. The laws of physics determine what can happen in physical reality.

Before proceeding, I want to clear up the difference, if any, between metaphysical possibility and physical possibility.

Here’s what I’ve worked out:

How We Know p. 276:

> Metaphysical possibility denotes an ability, potentiality, or capacity.

This seems broader than a physical possibility, which denotes a potentiality in reality. I’m adding an extra “in physical reality” qualifier when I conceive of what’s “possible”. I think reality is what HB cares about, too, so there’s no important difference here.

So I’ll use the term “physically possible” from now on to make it clear I’m not talking about epistemological possibility.

Any objections? Any substantial difference I’m missing?

I had tried mentioning what’s physically possible previously, and defining my terms, and I didn’t seem to have success communicating about this. I don’t know why. (Maybe I caused some confusion by mentioning logic too? I mentioned it in an attempt to be clearer. But it’s actually redundant because there’s nothing that’s logically impossible but which is possible in physical reality.)


curi at 4:27 PM on October 30, 2016 | #7255 | reply | quote

Evolution is a theory in epistemology.

Regarding HB's claim that evolutionary epistemology is a stolen concept:

Evolution is a theory in epistemology, so it's not stolen by epistemology. This isn't well known. The applications of evolution to biology and Earth history are secondary even though, historically, they were discovered first, and, culturally, they are better known.

> Perhaps one wants to object that it isn’t the *literal* biological theory of evolution that is being appealed to, but that that is just a familiar model to use to illustrate the method of “conjecture and refutation.” What counts, this objection would go, is that knowledge is based on a process of eliminating error.

It's the other way around. The biological theory of evolution is *literally* an instance of the epistemological theory of evolution. That's one of the applications.

Writers on evolution like Richard Dawkins know something about this. They don't just talk about genes as the primary mechanism of evolution. They talk about evolution working with *replicators* in general, and genes being one particular instance of a replicator.

> This won’t do. Truth and knowledge are more basic than falsehood and mistake. You can’t refute without appealing to prior knowledge. “Knowledge” can’t be defined as: “that at which you arrive by using knowledge (to refute a “conjecture”). That’s circular.

I disagree with this in terms of physics. I think part of the disagreement here is because you're talking in terms of more approximate high level concepts, and I'm talking about more exact lower level concepts.

Laws of physics and initial conditions specify the starting information and the flow of information after that. (Information flow is a technical term relating to physics and computation.)

Some physical events select information. Some replicate information. Some vary information. (These are all information flow.)

In low level terms, knowledge is a subset of selected information, so it comes after selection. Knowledge cannot be created without information and selection as precursors.

The precursors don't already contain the knowledge that gets created. The precursors can involve other knowledge, which is great if you've got it, and this also works for getting started with no knowledge.

(Popper's understanding of evolution is less precise, and less related to physics, than this. Popper provided crucial clues, but credit here should go to David Deutsch.)


curi at 7:34 PM on October 30, 2016 | #7257 | reply | quote

Is there a criticism of accepting this idea (as knowledge) in this context? Yes or no?

> But, in your system, you can’t be certain of that.

and

> How can you be so sure?

I’m not. I don’t have an amount of sureness. That’s not part of my system. I’m just after knowledge.

I have an idea which solves a problem and has no known errors, flaws, faults, problems, or other ways it’s lacking, and has no error-free competitors. In summary, I have a non-criticized idea. That’s the proper standard of knowledge: a single, uncontested, non-refuted, problem-solving idea.

When judging an idea, ask yourself: “Do I know a criticism of accepting this idea (as knowledge) in this context? Yes or no?” If no, accept the idea as knowledge. The idea has proven itself by meeting every non-refuted standard, criteria, demand, etc, that you’ve made of it (including positive criteria which demand an idea have a particular merit).

Note: This uses a particular definition of “criticism”: an explanation of a problem/flaw with an idea that matters in context. “You’re wrong!” isn’t a criticism. The context issue is you have to consider what problem the idea is trying to solve (what purpose it’s for). “That idea is bad because it won’t accomplish X” is not a criticism of an idea proposed as a solution to Y; it’s irrelevant.

Thus I reject the certain/probable/possible/wrong continuum and replace it with refuted or non-refuted (which is boolean, not a continuum). There is no way to prefer one non-refuted idea over another because all good arguments you could use to choose between them are either criticisms or criticism-equivalents (non-critically-phrased arguments which can be translated into a criticism). Any argument for differentiating ideas either is a criticism, can be rephrased as a criticism while retaining the same content, or is wrong.

You worry a non-refuted idea may not be good enough. But if you see an idea hasn’t been given much consideration and isn’t meeting some standard it should meet, pointing that out is a criticism that must be addressed.

Doing everything that’s primary in epistemology in terms of booleans (“True or false? Refuted or not refuted?”) is one of the improvements on Popper made by me and David Deutsch. There are precise details for the system (this is just a brief summary). I think Objectivists ought to appreciate it. Ayn Rand liked clear, decisive, absolute questions like “Yes or no?” I do too. Amounts of justification, sureness, evidence, certainty, probability of truth, etc., in epistemology, are vague. People pretty much just make the amounts up subjectively rather than having an objective way to measure them or well-defined units to specify them. Booleans are better.
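
Here’s a toy Python sketch of the boolean judgment (my own illustration for this post, with made-up idea names; it’s not a formal decision procedure): an idea is accepted as knowledge if and only if no relevant criticism of it is known, with no score or amount of support anywhere.

```python
# refuted or non-refuted: a boolean, not a continuum
def accept_as_knowledge(idea, known_criticisms):
    # accept the idea iff we know of no outstanding criticism of it in this context
    return len(known_criticisms.get(idea, [])) == 0

known_criticisms = {
    "plan A": ["doesn't solve the stated problem"],  # refuted
    "plan B": [],                                    # non-refuted
}
print(accept_as_knowledge("plan A", known_criticisms))  # False
print(accept_as_knowledge("plan B", known_criticisms))  # True
```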

Atlas Shrugged:

> “I don’t give a damn about your opinion. I am not going to argue with you, with your Board or with your professors. You have a choice to make and you’re going to make it now. Just say yes or no.”

> “That’s a preposterous, high-handed, arbitrary way of-—”

> “Yes or no?”

> “That’s the trouble with you. You always make it ‘Yes’ or ‘No.’ Things are never absolute like that. Nothing is absolute.”

> “Metal rails are. Whether we get them or not, is.”


curi at 8:19 PM on October 30, 2016 | #7258 | reply | quote

HB said in post 13833 at http://www.hbletter.com/forum/member-forum/why-im-voting-for-trump/:

> But this kind of similarity is not good evidence that this person is like your father.

Why not? No explanation given. I have noticed this before. When rejecting ideas, Objectivists reject them on authority or for lack of justification. They do not use CR.


Anonymous at 2:03 AM on October 31, 2016 | #7259 | reply | quote

HB closed the Myth of Mental Illness thread which was my favorite. :(

He believes in "having last words". He is not going to let discussions come to a conclusion. He is not interested in agreements being reached. He used the "authority" of an Ayn Rand quote to make a point which doesn't even defend his idea that using force against children and "insane" people is right. People having the right to have legal guardians defending their interests does not imply the guardians have the right to use force against them "for their own good."

Except for the Mein Kampf reference, Alan's post was very interesting. Why use that book as an example? Why not say Atlas Shrugged? It spoiled the benevolent tone of the post for me.


Anonymous at 2:30 AM on October 31, 2016 | #7260 | reply | quote

I accept physical probability but not epistemological probability.

Probability of physical events (like dice rolls) and epistemological probability (the probability an idea is true) are different things.

The example is confusing because it involves both. Here are some clear things we can say about rolling a die:

The chance of rolling 1-5 is 83.33%. True.

The chance of rolling 1-5 is 100%. False.

The chance of rolling 1-6 is 100%. True.

There is a high chance to roll 1-5. True.

Rolling 1-5 is physically probable. True.

I know the die will not land on 1. False.

It’s profitable to bet $10 that the die doesn’t land on 6, with a $1 profit if I win. False.

And specifically covering the hardest part:

> the proposition that it will land on a number between one and five is probable

This is ambiguous about whether “probable” refers to physical or epistemological probability.

It could mean: “I know it is (physically) probable the die will land on a number between one and five.” That is a proposition without any stated epistemological evaluation (because we took the word “probable” to refer to a physical probability for the die roll). The epistemological evaluation should be true.

Or it could mean: the proposition “I know the die will land on a number between one and five” should be epistemologically evaluated as “probable.” That would be incorrect. The correct evaluation of “I know the die will land on a number between one and five” is false. You don’t know that it will be 1-5. It might roll a 6 this time.

A third reading: the proposition “It’s (physically) probable the die will land on a number between one and five” should have the epistemological evaluation “probable”. This uses “probable” both in the physical and epistemological senses. I disagree and say the evaluation of that proposition is true.
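
Here's a tiny worked example (mine, just arithmetic, nothing from the HBL thread) that keeps the two senses apart: the physical probability of the roll is just a number, while the claim "I know it won't land on 6" is false, and so is the claim that the bet above is profitable.

```python
p_1_to_5 = 5 / 6
print(f"chance of rolling 1-5: {p_1_to_5:.4f}")  # 0.8333, i.e. 83.33% -- a physical probability

# The bet from the list above: risk $10 that the die doesn't land on 6, to win $1.
expected_value = (5 / 6) * 1 + (1 / 6) * (-10)
print(f"expected value per bet: ${expected_value:.2f}")  # about -$0.83, so "profitable" is false
```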


curi at 9:12 AM on October 31, 2016 | #7261 | reply | quote

> He believes in "having last words". He is not going to let discussions come to a conclusion. He is not interested in agreements being reached. He used the "authority" of an Ayn Rand quote to make a point which doesn't even defend his idea that using force against children and "insane" people is right. People having the right to have legal guardians defending their interests does not imply the guardians have the right to use force against them "for their own good."

getting help with your life, and with the exercise of your rights, also doesn't mean you are being deprived of rights or have fewer rights.

for a baby, having a (good) parent is not incompatible with your rights or a deprivation of rights.

the mental illness case is pretty different b/c there are people who say "I do not want to delegate X or Y. I'm fine. Leave me alone." and then they are fucked with by force.

children typically want lots of help from their parents (which unfortunately has some unwanted things bundled in, which is bad).

some "mentally ill" people want some kind of help, in which case there's no difficulty of doing it on a voluntary basis. some don't, in which case "expert" claims are used to justify the initiation of force against them.

HB supports the initiation of force. really sad.

and did you notice the new psychiatry drugs thread? so far no one has replied with a study, with **science**. instead there's a reply saying it's so obvious that lithium is great that issues like bias are non-issues and you should just skip the science. so foolish!


Anonymous at 9:18 AM on October 31, 2016 | #7262 | reply | quote

Picked at random

Mein Kampf happened to come to mind at that time as an old book available on the internet. I didn't think of it as hostile.


Alan Forrester at 12:38 PM on October 31, 2016 | #7267 | reply | quote

Physical measurements can be wrong, meaning they can be outside the margin of error.

Adam Reed says, "Measurement is, in context, a guarantee against error." and "contextually infallible". He also denies that "all knowledge is fallible" and asks for an argument. He thinks the problem is that I don't understand the concept of "precision".

I recommend reading a science book like _Measurements and Their Uncertainties: A Practical Guide to Modern Error Analysis_. I give two quotes showing the **two** numbers, not one, involved with measurement error. Given background assumptions, measuring tools have a percent chance (one number) to give a measurement within a margin of error (second number). The tool's precision is addressed in the margin of error. The chance for the measurement to be within the margin of error tells you the uncertainty of your measurements, which is never zero.

In section 3.3.1:

> Evaluating the error function of eqn (3.9) it can be shown that **95.0%** of the measurements lie within the range **±1.96σ** . [bold added]

A simpler example: you could have a ruler which measures lengths within ±1 inch 95% of the time. This is how measurement works in science. Measuring devices have some probability (less than 100%) of giving a reading in a range around the correct value, and some probability of giving an outlier reading.

In some situations, with good tools, outliers are rare enough not to worry about them. But they're never literally impossible. (Physically impossible.) So Adam Reed is mistaken to claim that (given some background assumptions) physical measurements infallibly give readings within the margin of error (in other words, that outlier measurements are impossible).

In section 3.3.2:

> Here we discuss a controversial topic in data analysis, that of rejecting outliers. From Table 3.1 we learn that we should not be very surprised if a measurement is in disagreement with the accepted value by more than one error bar ... However, as the fractional area under a Gaussian curve beyond 3σ or 5σ is only 0.3% and 6 × 10^−5%, respectively, we expect such large deviations to occur very infrequently.

In other words, outliers beyond 3 or 5 standard deviations are very infrequent (but their probability is not zero).
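
To make those fractions concrete, here's a minimal Ruby sketch assuming the Gaussian model the book uses (Math.erf is Ruby's built-in error function; the thresholds are the ones quoted above):

```ruby
# Fraction of Gaussian measurements falling within +/- k standard deviations:
# P(|x - mean| <= k*sigma) = erf(k / sqrt(2))
[1.96, 3.0, 5.0].each do |k|
  inside  = Math.erf(k / Math.sqrt(2))
  outside = 1.0 - inside
  printf("within +/-%.2f sigma: %.4f%%   outliers: %.1e%%\n",
         k, inside * 100, outside * 100)
end
# within 1.96 sigma: ~95%; outliers beyond 3 sigma: ~0.3%; beyond 5 sigma: ~6e-5%
```

The outlier probabilities are tiny, but they are not zero.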

HB says 99.9999 is 100. While that *approximation* will work fine for many purposes, it doesn't work for everything. One thing the *approximation* doesn't work for is claims it's exactly, literally, non-approximately 100.

For example, the chance for a typical gas molecule to move 1 millimeter by itself (they're always moving around, but usually don't get far), and be that far off at a particular time, is around e^(-[1mm/wavelength]^2) which is approximately 10^-10000000000000. This is negligible in daily life. But if you say it's 0, you have made a false statement. Truth matters. Low probability events can happen, so one better have a way of thinking which doesn't contradict how physics works.

A reason this matters is because *error is always (physically) possible*. Therefore one always needs error correction, not merely sometimes. Attempts to turn this *always* into a mere *sometimes* cause trouble. Every false exception damages the principle. And infallibilist exceptions causing trouble is a common problem in our society.

HB adds:

> By the way, such a thing as evidence cannot, in principle, be measured with anything like that precision. In fact, I doubt that it can be measured numerically at all, except, possibly in cases in applied probability theory, and those are not the concern of epistemology as such.

The discussion about scientific measurement isn't about measuring evidence, it's about measuring information (e.g. the length of a table). (And the 99.9999% wasn't precision, it was the chance for a measurement to be within the margin of error.)


curi at 4:28 PM on October 31, 2016 | #7275 | reply | quote

Clarifications

> The equivocation between metaphysical and epistemological possibility is going on here:

There's no equivocation. We only mean metaphysical possibility.

> > When Popper rejects “irrevocably true statements,” he’s saying that in a future context we may get new information and change our minds. Popper thinks of this in terms of fallibility. Whatever we do, we may have made a mistake.

> “May” can be used for either metaphysical or epistemological possibility. We can use either throughout:

> Metaphysical possibility: When Popper rejects “irrevocable true statements,” he’s saying in a future context it is not impossible that we will get new information and change our minds.

Yes, that's what Popper means.

> Yeah, but so what?

A reason Popper emphasizes the metaphysical possibility of being mistaken (no matter what you do to prove you're right) is that people often won't listen to criticism. I mean an actual, topical criticism of an idea (e.g. "You're wrong that socialism brings prosperity because economic calculation requires prices..."), not a generic statement of fallibility. The metaphysical possibility of error in general provides a reason to listen to particular criticisms.

There are other reasons to mention it which you could find by checking the original context of quotes. And contradicting infallibilism is a reason.

I agree it's pretty basic. We're talking about it because people selected Popper quotes on the topic and said Popper was mistaken. There was a misunderstanding. Misunderstandings are important to clear up in conversations.

This is actually a somewhat common experience I have. I say something simple and then people try to read some bigger point into it and then start arguing with me because they disagree with that bigger point (even when I very clearly say I'm not making a bigger point yet). No, I'm just starting simple! It's good to get the basics right so you can build on them! That's an important method.

Also: Popper and I don't think the metaphysical possibility of error is evidence of error. We never use "that is fallible" as a criticism of a particular idea. Fallibility is 1) *not a bad thing in any way* and 2) *applies to all ideas*, so how could fallibility ever be used as an argument against an idea? We know this. We agree. (I've repeated myself on this point because HB has brought it up several times.)

> As for “Whatever we do, we may have made a mistake,” that’s false on several counts.

It means there are no physical actions which offer a 100% guarantee against the metaphysical possibility of error. This is true. One reason is that thinking is a physical process which can have malfunctions.

> Second, the “may” has to have the meaning of “there’s some evidence,” but its alleged basis is the metaphysical capacity to err.

No, it doesn't mean there's any evidence of error. The basis *and* meaning are the metaphysical capacity to err. That's it.

You may find this obvious, but Adam Reed didn't know it and asked for arguments to this effect. I think many people have misconceptions about it.

> Popper is wrong to drive a wedge between “prove” and “know.”

I will grant this for the purpose of the discussion. But Popper was writing in his (wrong) terminology, so please don't misunderstand his meaning.


curi at 4:37 PM on October 31, 2016 | #7276 | reply | quote

Mistakes of Karl Popper

One-line summary: Some people do use “certainty” to mean that no mountain of counter evidence could ever dislodge their belief

Brian Yoder writes:

> Very often what opponents of certainty mean when they use that term is something along the lines of “Such a mental conviction that no mountain of new counter evidence could ever dislodge the belief.” Of course none of us mean that when we use the word “certainty.”

But Harry Binswanger does seem to mean just that, in some cases. In post 13854, he writes:

> But that metaphysical possibility of new information reversing one’s conclusion is applicable only in some cases. It does not apply to “2 + 3 = 5.” Nothing could overturn that. There’s no new information to be had about what it is to be 2 or 3 or 5, or about what it means to add the first two to get the answer, “5.” Number concepts are a special case, along with a few others, of concepts that are so abstract that there’s nothing more to learn about them (other than new relationships to other numbers).


Josh Jordan at 4:09 AM on November 1, 2016 | #7300 | reply | quote

someone asked for help arguing with anti-semites

One-line summary: They’re both anti-Jewish.

What effect has anti-semitism had on the world? A long history of violence, harassment, inferior legal status, discrimination, etc, against Jews.

What effect has Islamophobia had on the world? Its primary use is as a false accusation to harass and verbally attack the defenders of Jews and defenders of Western civilization.

So they’re somewhat similar. Both are used, in reality, in an anti-Jewish way.

Anyway, check out this link: Islamophobia: Thought Crime Of The Totalitarian Future

http://www.frontpagemag.com/fpm/256647/islamophobia-thought-crime-totalitarian-future-david-horowitz


curi at 9:47 AM on November 1, 2016 | #7301 | reply | quote

misc HB replies

OK, we have a lot of disagreements and we can't talk about them all at once. My plan is to (somewhat) quickly reply to some things to catch up, but then focus on important positive issues afterwards.

Regarding how much to emphasize fallibility, HB criticizes CR for being "so out of scale". But HB is not familiar with the overall writings of Popper, DD or myself, so he doesn't know how much or in what ways fallibility is used. What HB is judging by is conversations in which Objectivists persistently bring up fallibility and argue with it. We emphasize fallibility more in discussions when a critic is denying it, but that doesn't tell you how much it comes up in our own presentation of our views. I think the best thing to do at this point is to focus more on positive presentation of valuable ideas, rather than debating misconceived criticism.

HB's arguments don't take into account that all steps/actions/safeguards you can take against error are themselves fallible.

Regarding measurement, HB's position is in contradiction to uncontroversial science and he is not offering some rival scientific theories (e.g. different laws of physics than the ones he contradicts, or different mathematical formulas for physical calculations than the ones he contradicts).

I meant 99.9999 as an abbreviation for the original number with more 9s, not as a new number.

Human thought is a physical process. Consciousness is physical and obeys the laws of physics. That doesn't mean you study physics to understand consciousness, but it does mean no claims about consciousness are allowed to contradict the underlying laws of physics.

I make zero exceptions to fallibility. Exceptions to principles do matter.

That water can freeze is determined by the laws of physics. Which physical processes create knowledge is determined by the laws of physics. The universal possibility of outliers is determined by the laws of physics.

The fact that atoms on Earth are always moving around (that's what temperature above absolute zero is), and may be out of place when measured, is determined by the laws of physics. The laws of physics say the probability for an atom to be out of place a certain distance drops off very heavily as distance increases, but not to zero. I wouldn't normally even mention this. It doesn't come up in a positive presentation of my ideas. I mention it because it's being denied, and when people are denying what I say then a strategy I often use is to look for very simple things we can agree on and work from there. But even the most basic, uncontroversial stuff I say is being denied. That's rough.

It's being denied because of some misunderstandings about methods (e.g. people are assuming everything I say must have some big point to it, when actually I'm intentionally trying to say very simple stuff to find some points of agreement to work from).

Attempts to clear this up have been met with more of the same, combined with impatience that I haven't yet said the important stuff because I've been busy responding about trivialities. And complaints that I'm talking about trivialities – directed at me, rather than at the people who are denying them!

People routinely argue with points that aren't actually what they will appreciate talking about. Asking nicely to move on hasn't gotten me anywhere, but I'm going to move on anyway.

> But Popper’s whole project, as you outline it, is in appealing to what is still “possible by the laws of physics.”

No, the project is to understand how we know. The growth of knowledge works by evolution. I went over this, though I would expect it to require reading and discussing multiple books to understand (start with David Deutsch's books).

Evolution has two basic components which are creation and error-correction. The second component is crucial because all creation is fallible.

For various reasons, it's the error-correction component that's more interesting. Brainstorming a bunch of ideas is easier than objectively figuring out which are good or bad and why.

One of the mistakes with the inductivist worldview is it focuses on methods of creating ideas, not on methods of error-correcting (including improving) ideas once you have them. And we see that same mistake in HB's belief that his understanding of small integers and arithmetic is the final perfect truth that error-correction can't ever apply to. The motivation here is interesting because what's the harm in using the normal methods of error-correcting thinking? If you're right, you have nothing to fear from error-correction.

HB says:

> Number concepts are a special case, along with a few others, of concepts that are so abstract that there’s nothing more to learn about them

Nothing more to learn!

I've already seen four ways I think HB's conception of integers and math could be improved. They don't specifically affect the outcome that 2+3=5, but they are illustrative of how things aren't as simple as they appear. There are plenty of issues involved in 2+3=5 to reconsider, debate, or learn more about.

2 is an integer. The concept of integers, and their properties, is tricky. BTW, there are other types of numbers, they're tricky too, and differentiating integers from other numbers matters. If these were floating point numbers, addition would in general only be guaranteed accurate to within rounding error (on the order of FLT_EPSILON), not exact.
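
A small Ruby illustration of that floating point behavior (Ruby floats are doubles, so Float::EPSILON plays the role FLT_EPSILON plays for C's single-precision floats):

```ruby
# Floating point sums are only guaranteed accurate to within rounding error.
puts 2.0 + 3.0 == 5.0     # => true  (this particular sum happens to be exact)
puts 0.1 + 0.2 == 0.3     # => false (rounding error)
puts (0.1 + 0.2) - 0.3    # => 5.551115123125783e-17
puts Float::EPSILON       # => 2.220446049250313e-16
```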

The set of integers is an infinite set. Infinity is really tricky. So tricky that HB is denying that some integers exist, and saying that a widely held belief about integers is false:

> For instance, it is widely believed that there’s a number like: 10^100^100. There isn’t.

So HB, at the same time, is 1) debating the properties of integers and taking a minority position (not just having a problem with infinity, but with large finite numbers), and also 2) saying his belief about a particular integer calculation could never ever be revised in the future.

I think that's unreasonable. When you're actively debating the nature of integers, you shouldn't have absolute confidence in your specific claims about some math problems involving integers.

My second way I think HB misunderstands math is I don't think he understands math is a type of computation and the laws of computation are dictated by the laws of physics. Under other laws of physics, computation could work differently (I'm guessing he'll deny this, but I don't know). And 2+3=5 could potentially be revised when we improve our knowledge of the laws of physics, which could certainly happen.

My third way I think HB's understanding of math could be improved is I don't think HB understands that doing math is a physical process. Thinking is a physical process. Stuff can go wrong. You can misread a number. You can make a typo when writing the result. You can make a calculation error when doing mental math. You can use a calculator that malfunctions. There's no way to do something like absolutely 100% guarantee you haven't misread something. If you read it 5 times, you could misread it 5 times. (This isn't even rare. People routinely proofread essays 5 times and miss the same typo all 5 times.)

My fourth point is I don't agree that "Mathematics is a tool of measurement." Math is used for all sorts of things. Math plays key roles in writing and sending emails. Math is used for cryptography. Math is used for representing letters as numbers and then processing and displaying them. Math is used for budgeting. If I calculate that spending $2000 per month on rent will cost $24000 per year, that helps me budget, and I haven't measured something.

There are other difficult issues involved. Is math a priori? Some, all, none? People disagree about this stuff. It's hard.

10^100^100 is a number you can actually do computations with. E.g. I can double it. Or I can work out that it's 10000000000^200. Or I can square root it. If I square root it 10 times in a row, I get 90. But not the integer 90: that's rounded from the 89.7687 that WolframAlpha gives. And if I square 90 10 times in a row, which is a calculation I can do, and my computer can do, then I'll get a number around 10^100^100. I just did it on my computer:

My ruby code: x=90; 10.times {x=x**2}; x

And the output is 2002 digits:

1394214727062367914687352879670157072326062321139981867597622842832032482673993274234850024760182945480272076581245800069562691222247613756153589976812373494337854015847291066933952552814911691420170821153079477718019547045674332114645606391409446736353641550166933892016135940389846869161786237025783521906362517760397421013848514743110837351028779931312121969626954646544698167192029706192523430266460141324855483079358472609004415556949434981919556727985963774597773926365755238734950863399943759859765437105700914382332572284205295945574573596097236397875195676214349999548433821004894009249959619040232593243010479659177831242751628805145427355281522289274838413728448102554703994718413488903794372503934015624766640199728129756602274499881777680026614368979845341843310799503993067568180613109480845349853746612032784365903492314310715694023894297689221804858389687649685235712612880380643104425852839332564045727486066401513073768002507254274968456076628213466760987033610000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
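
Incidentally, Ruby integers have arbitrary precision, so this kind of computation is easy to do directly. A small sketch (using 10**2000 purely as an illustrative big number):

```ruby
n = 10**2000
puts (n * 2).to_s.length          # doubling it: 2001 digits
puts Integer.sqrt(n) == 10**1000  # => true, its exact integer square root
x = n
10.times { x = Integer.sqrt(x) }  # repeated integer square roots shrink it
puts x                            # => 89 (cf. the 89.7687 above)
```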

I don't doubt, specifically, that 2 plus 3 is five, but I do have lots of doubts about HB's way of conceiving of math, integers, etc. He seems to think big numbers aren't real even though I can work with them, including in ways that relate them to the small integers he does think are real. And I don't think he should be putting effort into making exceptions to fallibility (to gain what?) in an area with multiple tricky, controversial issues! When people are trying to deny fallibility and error correction in some area, I always wonder why. Is it just to save time? They'd be better served learning faster methods. All they have to do is say, "That's open to reconsideration in principle, but I see no reason to spend my time on it now. Do you have a reason?" And I don't, in this case. I haven't asked to reopen whether 2+3=5, I've only asked not to make principle-violating statements about having omniscience over some issues.

Why is HB so bothered by fallibility that he will reply to a basic science book by attacking science with comments like "This [quote of basic science] shows the error in Popper’s approach"? He seems unaware of what the laws of physics say and how measurement works in science (it's not controversial, there are no serious proposals for alternatives, and it's not coming from Popper).

> It’s not approximate. It’s literally and exactly true that there’s no present meaning to a difference between a “measurement” of 99.9999999999999999999999999999999999999999 and one of 100. As I said, it doesn’t even matter if, as technology improves, a meaning can be given to that difference, because your point depends upon extending the numbers to the point of indistinguishability from 100. Your point needs to be that it’s an (alleged) possible discrepancy that due care can’t eliminate. But if one number is in principle indistinguishable from another, they are equal.

HB is not able to follow discussion about science. Again: 99.9999999999999999999999999999999999999999 is not a measurement or a precision. It's a confidence of not having an outlier. I said this already. He's just not familiar enough with science to follow. I'm not trying to be offensive but I don't know what else to say. I think this is true and relevant. Most people aren't familiar with science. And that's OK. And if you don't want to read the book on how measurement works in science, that's OK too. But then don't argue about it.

> Ironically, it is Popper who is the infallibilist, Platonic, reifier here. If we recognize that concepts are tools of cognition, we can’t make statements to the effect that we can always err because our concepts don’t “live up to” the infinities found in reality.

That's not the statement I made. HB seems to think, like Adam Reed, that I'm complaining that our measuring tools aren't infinitely precise. But I never was and have repeatedly clarified.

> No new knowledge of math can contradict or throw into doubt that 2 + 3 = 5.

This is a claim to be able to predict the future growth of knowledge omnisciently in some cases, which is a bad idea on principle, and there's nothing to be gained by it.

> My printer stops working. I change the cable to it and it starts working again. The truth revealed is that the cause of its stopping working was the cable (let us assume I don’t even turn the computer or the printer off before changing the cable).

The old cable may have just been plugged in loosely. The printer software may have malfunctioned and started working because changing the cable unplugged the printer which caused a restart. These are both common. Cables going bad is uncommon.

I think this illustrates typical neglect of the complexity of causality. "I did X. Then Y happened." does not mean X caused Y. But people find it very intuitive to attribute a lot of what happens to their intentional actions.

This is how myths get started. Someone has printer problems several times and keeps getting new cables and it keeps fixing the problems. He concludes the cables go bad often, which is false.

So, no, the truth hasn't simply been revealed. Thinking is harder than that.

> No, I wasn’t talking about measuring precision, but about measuring the degree of warrant for an assertion. I was talking about the axis of evidentiary support, from possible to likely, to certain. In most cases, the basic and everyday ones, we can’t assign a number to points on that axis. I can’t say, literally, “I am 87.248 percent certain that Joe is a moral person.” Nor, even “I am 40% certain that he works for Intel.”

Even in physics it's a mistake to think of measurement as being about numbers. You measure quantities like weight, length or speed. Weight is how heavy something is. You can *represent* weight by a number along with units, but that's different than weight *being* a number.

Numbers are like the English language. They help us write, think and communicate more conveniently. They're great. But it's dogs that exist in the world, and they are animals not words. And it's weights that exist in the world, and that's a matter of actual real stuff like mass and gravity. One shouldn't mix up the possibility of labelling something with a number, or not, with it *being* a number, or not.

Also you can technically label anything with numbers if you're willing to do it badly. So there's no fundamental difference in terms of whether something can or can't be labelled with numbers.

> Why is that something that anyone using proper cognitive methods needs a “guarantee against”?

You don't need these guarantees. The point is you don't and can't have these guarantees. So: stop wanting them, looking for them, or claiming to have them!

---

Everything good, valuable, nice, important, etc, about Objectivism can be achieved without contradicting science, denying fallibility, or trying to divorce philosophy from physical reality. Going forward I'll focus more on sharing how that works. More to come!


curi at 1:52 PM on November 1, 2016 | #7306 | reply | quote

Arbitrariness, pragmatism and consciousness.

Reason is all about error correction. Error-causing and error-entrenching ways of thinking are irrational. Error-finding and error-correcting ways of thinking are rational. All exceptions to reason are dangerous. Every excuse for why not to use reason in some case is damaging to the principle. And there's never anything to gain by thinking in some other way. There's always some misconceived motivation behind deviations from reason.

> Isn’t that just pragmatism?

When I draw a distinction between writing down a bunch of useless, purposeless facts (information), and knowledge, I don't think that's pragmatism. Not every fact is knowledge. It's important to differentiate between thinking with some design for a purpose, and raw information. The world is full of piles and piles of information. E.g. a grain of sand on the beach has enough measurable information to fill many, many books. But analyzing and writing down all the physical facts about the grain of sand isn't knowledge, it's garbage.

Knowledge deals with purposes, design, adaptation, goals, problems, intelligent selection of the important stuff, information that can accomplish something, useful information, or, answers to questions. There are many different English words and concepts that all relate to knowledge because it's such an important concept that keeps coming up independently in many fields and lines of thinking. They're all getting at the same thing. I mention many because I don't know which will work best for this audience.

A fact that answers a question is knowledge. A fact a person picked out as worthy of attention is knowledge. A fact that was written down arbitrarily or randomly is not knowledge (people don't normally do that, but the distinction still matters, and anyway nature has filled the world with far more than trillions of facts, most of which are boring).

> So, let me ask: what is Popper’s/Deutsch’s/your stand on the arbitrary? Is anything allowed to be hypothesized and tested, regardless of it having no evidentiary support?

We are not afraid of arbitrary ideas. We don't have to try to exclude them from the discussion. All ideas are rejected by criticism. Any idea is either trivial to criticize (one of the pre-existing criticisms you know offhand already refutes it) or worth some thought to address.

"You made that up arbitrarily" is an effective criticism in many, but not all, contexts. (An example context where it's ineffective is a discussion about notional, abstract sets of ideas, which, like very large integers, can be worked with in some productive ways.)

We don't ban people from saying, "What if..." We answer it. (If time and effort are the concern, there are methods of dealing with that like writing down general-case canonical answers and then giving a link to the next guy instead of writing a fresh answer. The important things, more than answering every individual, are that we know how to answer it (for our own sake) and that there is an answer in writing in public which lets people learn from it and potentially criticize and improve it.)

Ideas are only tested empirically if they have empirical content and survive a bunch of criticism. Testing is too much work to test an idea that hasn't already been exposed to a bunch of criticism; it'd be an inefficient use of time. Most bad ideas are refuted before they get to the point of being tested.

I also disagree about excluding arbitrary ideas from being true or false. I think they're all either ambiguous (which is common) or else true or false.

Here's a pointless, arbitrary claim:

3892034+9802341 > 4893234+8239434

This is unambiguous. It's either true or false.

I could check whether it's true or false. There's a fact of the matter. But that fact would be worthless. Coming up with dozens of arbitrary claims and then writing down facts about them wouldn't be knowledge. This ties in with what I was saying above.
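
For what it's worth, the check itself is a one-liner in Ruby:

```ruby
puts 3892034 + 9802341 > 4893234 + 8239434   # => true (13694375 > 13132668)
```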

> Now, the concept for what changes an entity is capable of undergoing needs to be called “metaphysical possibility,” not “physical possibility,” because it applies to consciousness as well as to matter. It is possible for me to evade. As a human being, I have that capability. But evasion isn’t a physical issue, and physics doesn’t study it. A simpler example is that I have the capacity to experience pain, but pain is not something physics studies.

Consciousness is computations done by matter. It's literally software. Consciousness is a physical process, not something separate.

Evasion is not a physical issue, and physics doesn't study it, but it is a layer above physics, with physics underlying it by a chain of connections, and no claims regarding evasion may contradict physics.

Physics does have some fairly straightforward implications about pain, e.g. that an event can't cause me pain at least until enough time passes for light from the location of the event to reach me. I mention this one because it's not useless: some people, e.g. voodoo believers, contradict it (they think stabbing the voodoo doll can instantly cause someone pain at a different location). If they drop the claim that it's instantaneous, then you can still refute them with a physics argument, but it gets a bit more complicated.

One of the important issues here is people often claim stuff is **impossible** while having no idea how to relate their claim to laws of physics. And they're frequently wrong. If you can't relate an impossibility claim to a law of physics, that's a good sign you don't know what you're talking about. Lots of what's claimed to be impossible is simply stuff that people don't know how to accomplish. Ignorance and impossibility are pretty different.

Anything which is metaphysically impossible is also physically impossible, and if you don't know the relation to physics then there's more to be understood. Claims about consciousness shouldn't just stop there, it's also valuable to relate them to physics.

> As I said in How We Know, you can always attain certainty–but sometimes that’s a certainty restricted to the meta-level, as in “I’m certain that I don’t have enough evidence to decide this now,” or “I’m certain that this is likely.” Or even: “I’m certain that I’m confused here.”

I agree with the concept here, on my understanding that "certainty" refers to having knowledge (meeting the standards of knowledge), not infallibility.


curi at 2:10 PM on November 1, 2016 | #7307 | reply | quote

Induction and weighing evidence.

> What’s the source of that idea? The logical processing of observation–or “What if?” That’s the issue.

Doing logical processing requires being able to think.

The basic way induction works is it presupposes thinking to do things like judge which correlations, patterns and observations matter (selective judgement out of the infinite set of arbitrary stuff).

Thus induction cannot explain how thinking works. It can only, at best, be a higher-level abstraction.

I don't think I'm just picking on this particular wording. It's a common theme with claims about induction. And there's a logical issue involved: any finite set of data fits infinitely many patterns. Making a non-arbitrary selection from that infinity requires already being able to think well. So induction presupposes thinking.

> Where I disagree is on rejecting the assessment of the evidence as ranking as possible, likely, or certain. “Possible” is the most interesting case, because “some, but not much, evidence” is what separates the arbitrary from legitimate hypotheses.

All ideas should be evaluated as refuted or not. (Count ambiguous ideas as refuted.)

There's never any need or way to compare two non-refuted ideas and decide which is better with some kind of amount of evidence. Any piece of evidence either does or does not contradict a particular idea (given a context with some background knowledge).

If two ideas are non-refuted, then they both have the same relationship to the evidence: they are contradicted by none of the evidence.

And what's the point of weighing evidence if not for choosing between non-refuted ideas? (The one with the higher score wins.)

When it comes to critical arguments, we shouldn't ever compromise and accept or act on an idea which we know a single criticism of. And we don't have to. I'll explain how to do that in the future, but for now just consider: if there was a way to do that, wouldn't it be better?


curi at 2:28 PM on November 1, 2016 | #7308 | reply | quote

Pragmatic concerns, non-refuted thinking, continuums, evidence, and seeking a devil’s advocate.

HB's attitudes to numbers and error seem pragmatic to me. Pragmatically, big numbers and tiny error possibilities don't make much difference to daily life. So ignore them. I get the appeal, but the practicality of disregarding something doesn't stop it from existing in reality. Approximations are great and useful, but claiming they're literally exact is false.

Also, error isn't just a rare thing. It's common. The rare cases were brought up because they're more clear cut and harder to deny because they involve things like laws of physics.

---

Regarding how to always accept and act on non-refuted ideas, I think I can explain this briefly in a way HB will understand given his previous comments. When you don't know something, you create a *meta idea* (HB mentioned meta ideas and certainty earlier). You ask a question like, "Given I don't know X or Y, what should I do right now?" You come up with an answer to that. You critically consider that new answer. If you have no criticism of it, you accept it and act on it.

Epistemology itself is fully boolean. Subsidiary thinking can deal with continuums. For example when considering what college to attend you could look at the student/teacher ratio (smaller is better) and the distance from your parents (larger is better). You could then "compromise" by selecting a school, we'll say Stanford, that scores well on both counts, but does not score the best on either metric. But in terms of fundamental epistemology, you ask yourself: "Do I have a criticism of attending Stanford? Yes or no?"

This could be fleshed out with e.g. explanations about why you care about those metrics and how to deal with multi-variate rankings (which is a huge problem in theory, but it's often not too hard to come up with something you have no objections to acting on).
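
Here's a minimal Ruby sketch of that structure. The colleges, numbers, and combined score are made up for illustration; the epistemological step is the final yes-or-no question, not the scoring:

```ruby
# Secondary, constructed ranking: smaller student/teacher ratio is better,
# larger distance from parents is better. The weights are arbitrary choices.
colleges = {
  "Stanford"  => { ratio: 5,  distance: 500 },
  "College A" => { ratio: 4,  distance: 20  },
  "College B" => { ratio: 15, distance: 900 },
}

scored = colleges.map { |name, c| [name, 10.0 / c[:ratio] + c[:distance] / 500.0] }
pick = scored.max_by { |_name, score| score }.first
puts "Top of this constructed ranking: #{pick}"   # => Stanford

# Primary, boolean epistemology: do I have a criticism of attending it?
criticisms_of_pick = []  # none known in this example
puts(criticisms_of_pick.empty? ? "No criticism; accept it and act." : "Refuted; reconsider.")
```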

There are many, many continuums you can define which have some use. These things are human constructs. There are infinitely many possible scales you could rank colleges on. You define some rankings you care about and then use the rankings you constructed to aid you because you have an explanation of how they'll be helpful (in this case, as people often do, the explanation was left implied as unstated background knowledge).

This stuff like ranking colleges by distance or student/teacher ratio allows amounts, measures, continuums, etc, into our thinking. But it's not a primary part of epistemology, it comes later.

What about the evidentiary continuum? That one is no good at all because there's no such thing as evidence for something. The "support" relationship between evidence and an idea is a myth with logical problems. Logically the relationship is (given background context) that some evidence either does or does not contradict an idea.

---

I propose a demonstration. I think that'd help clarify my method. I suggest HB play devil's advocate and present some critical arguments and questions regarding biological evolution. They would include something arbitrary, and whatever else HB isn't clear on how I'd address. Then I'll address it.

I suggest evolution because it has some relevance, it's non-trivial, it's an interesting topic, and it's something we agree on. The point is about how we get to the same conclusion (accepting biological evolution as knowledge) in different ways, and arguing about particular claims (e.g. Lamarckism or intelligent design) would be a distraction but hopefully won't come up.

Questions or arguments regarding the college selection scenario would also be welcome.


curi at 3:14 PM on November 1, 2016 | #7311 | reply | quote

replying to a different guy about induction and direct perception

One-line summary: Indirect perception doesn’t block unlimited human progress.

To make induction more powerful than “induction by repetition”, you’re assuming intelligent thought as part of the inductive process, which prevents induction from being an explanation of thought.

Creating causal explanations is a process that itself needs explaining. How do we do that in the first place? That’s what epistemology needs to answer.

You also bring up integration. A good integration is knowledge. The issue is to explain how to create new knowledge. So it again presupposes the stuff we’re trying to figure out how to do.

You don’t address the logical issues of confirmation (how is it different from non-contradiction?).

As to perception, there’s a mix of disagreement and misunderstanding. Our eyes are theory-impregnated in the same sense that glasses, telescopes, rulers, scales, etc, are. They perform some measurements and not others. They function a particular way out of the many conceivable ways they could have been designed. Our eyes detect green light but not ultraviolet light. Our ears detect some frequencies of sound and not others, and they work differently than the ears a dog has. I think the term “theory-impregnated” is confusing if you aren’t really familiar with Popper, and so this point is misunderstood. There’s also more to it, but can you agree with this point so far? This of course isn’t a bad thing. It’s fine and doesn’t harm our ability to know, though we do have to acknowledge it and do things to deal with it (such as making glasses and making tools that can detect sounds and lights we can’t see with the perception tools that genetics gave us).

Also, as a scientific matter, computation is done on the information from our eyes before it reaches our mind. In my understanding this computation is lossy, not lossless (meaning, among other things, that it’s irreversible — there’s no way to get the original raw data back). This is, again, fine. It doesn’t ruin our ability to know. It is important to understand it so one can deal with it well.

It helps to read The Beginning of Infinity chapter 2 which explains how the use of tools (e.g. microscopes), while seeming to put a layer in between us and reality (me -> microscope -> reality) actually (with good tools) helps us learn about reality better.

I think it’s not just misunderstanding of Popper’s meaning; you’re also going to disagree with some of this. You’re going to claim direct perception is a reasonable concept, even though our eyes and some photons are in between our mind and the object we perceive, so it’s indirect. You’ll do this partly because you insist we can know reality (which is true) and partly, I’m guessing, because of a mistaken concern that without direct perception we can’t know reality (which is false), so you feel you have to protect direct perception even though it’s pretty straightforwardly wrong: our eyes, and the photons that carry information to us, are not directly part of our minds.

None of this limits humanity because we can make tools to help with problems genetics didn’t solve for us, such as measuring infrared light. We don’t need direct perception of infrared — or anything else — to learn about it. Indirect perception is just fine.

Perhaps you want direct perception to try to eliminate the possibility of error, but it’s better to use error-detecting-and-correcting methods of thinking (aka reason) instead.


curi at 6:46 PM on November 1, 2016 | #7314 | reply | quote

the summary was hard to write for this one

One-line summary: Some agreement.

> This fascinating discussion is moving so fast!

> But with apologies for going back, Elliot Temple says

My attitude to discussion is I’ll still be interested in philosophical issues in a week or a year, and would still be responsive about older posts then. These ideas are timeless. No rush. No apologies needed.

In my view, everyone should participate on their own schedule, and discussion being asynchronous is really convenient. I don’t like to put much effort into managing my sometimes-bursty writing for other people’s convenience (e.g. spread it out with just 1000 words per day), and think it makes more sense if I just write when it’s convenient for me and then they read it and reply whenever is convenient for them. (Unfortunately in practice I find lots of people usually won’t ever reply if they don’t reply within the first two days.)

For my own forums, I actually decided against using a subreddit when I found out they block new comments on all discussions after 6 months. I sometimes reply to 20 year old emails, even if the author is long gone, if I think the ideas are still interesting to discuss.

ET wrote:

>> I have an idea that solves a problem and has no known errors, flaws, faults, problems, or other ways it’s lacking, and has no error-free competitors. In summary, I have a non-criticized idea. That’s the proper standard of knowledge: a single, uncontested, non-refuted, problem-solving idea.

Gary Judd replied:

> Can we not put this into traditional Objectivist terminology by saying that an idea qualifies as knowledge if it can be reduced to the facts of reality and integrated with the rest of one’s knowledge all without any contradiction emerging.

I agree that substantial parts of what I’m saying are compatible with Objectivism. I find understanding an idea in multiple ways is valuable.

There are some differences, due to your phrasing, where we may agree but it’s not clear. When you talk about “the rest of one’s knowledge” I’m concerned that some disliked criticisms from third parties may be disregarded (because they aren’t yours) when they ought to be answered.

As to contradictions, it’s believed that quantum mechanics and general relativity contradict each other. Both of them are still used anyway to accomplish things where that contradiction doesn’t ruin their usefulness. An idea having a flaw in general doesn’t necessarily mean it won’t work for any purpose. The flaw or contradiction it has in general isn’t always relevant depending on what you want to use the idea for. Another example is Newton’s Laws which are false because they contradict reality. But people use them sometimes anyway. We have a good understanding of why they’re false and under what circumstances they provide an easier way to calculate an answer that’s close enough, and in what kinds of situations they’ll be too far wrong. From what you say, I don’t know if we’ll have pretty compatible views on this, or not.

Some of the main things I disagree with Objectivism about are: induction, supporting evidence, exceptions to fallibilism, and any kind of degrees or amounts or probability of truth, justification, or certainty. I don’t think ideas should ever be given any kind of score or score-equivalent in epistemology (though they can be ranked with scores in secondary ways, e.g. the ranking colleges by student-teacher ratios example above. It’s specifically epistemology scores for how good or true an idea is that I object to. A student-teacher ratio ranking is fine and useful but doesn’t directly tell you how good or true any idea is.)

> Or does Mr Temple wish to dispense with the reduction to the facts of reality and if so, why?

I think everything should be connected with the facts of reality. I also think it’s valuable when an idea goes beyond the facts to say more.

> (I draw the inference that “that solves a problem” means that there is reduction to the facts of reality, otherwise there wouldn’t be a problem to solve.)

It means ideas need to have some kinda purpose, e.g. answering a question, rather than no purpose at all. There’s no way to judge an idea with no purpose (besides rejecting it for being pointless). Ideas need to be judged by whether they successfully do whatever it is they are trying to do. Ideas should be clear on what the point of the idea is and how it accomplishes it. (In many cases this is clear from context without explaining.) Whether the purpose of the idea is a good purpose is also open to judgement.

We don’t just have ideas for the heck of it, as a whim, or arbitrarily. Each idea should do something, have some value it brings.

I’m guessing the concept I’m getting at here is fairly simple, not a big deal, and you probably already believe something similar. But I don’t know which way to say it will be clearest for Objectivists who aren’t used to Popper’s problem solving terminology.


curi at 10:47 PM on November 1, 2016 | #7322 | reply | quote

Are the real numbers invalid?

> We always have to keep to the context that gave rise to our concepts. Numeric concepts that demand infinite precision are invalid.

Each real number (https://en.wikipedia.org/wiki/Real_number) is infinitely precise, because each real number is equivalent to an infinite string of digits. Are the real numbers invalid?


Josh Jordan at 11:31 PM on November 1, 2016 | #7324 | reply | quote

Which integers can be said to exist?

> The standard of precision has to be kept to what we can successfully use in measurement. For instance, it is widely believed that there’s a number like: 10^100^100. There isn’t. There can’t be such a “number” in the normal meaning of the term, because that would require all the intervening numbers between one and 10^100^100.

Does the number googolplex (10^10^100) exist? Matthieu Walraet showed [1] that there are at least that many possible go games. If we increase the board size much beyond the current 19×19, we will easily reach 10^100^100 possible games. Is counting such things a valid use for a number?

[1] Matthieu Walraet. A Googolplex of Go Games. (Jan 9, 2016)

How about the number 10^10^123? The physicist John Baez wrote [2] that if we turned “the observable universe into a black hole… [it] would have 2^(10^124) [approx. 10^10^123] quantum states. That’s the biggest number I know that has any good physical meaning. It’s big… but still tiny compared to plenty of numbers I can easily name, like Graham’s number.”

[2] Baez, John. How many bits of information could you fit in the whole universe? (Jan 24, 2013)


Josh Jordan at 11:31 PM on November 1, 2016 | #7325 | reply | quote

Is the maximum valid integer bounded by the laws of physics or by technology?

> Numbers like, 945919593906195785439620697686228904575 08946673734968568783476216267 . . . carried on for page after page. There are not that many electrons in the universe.

Suppose we learned that we overestimated the size of the universe. Might numbers that were previously thought to exist then go out of existence? Might this discovery affect the validity of our previous calculations?

More generally, in order for a number to exist, must it be possible to represent every intervening number exactly using today’s technology? Or is there some theoretical principle based on the laws of physics that limits the largest number that can be exactly represented with any possible future technology? For example, would the maximum positive integer that could be said to “exist” have been smaller in the middle ages than it is today?


Josh Jordan at 11:32 PM on November 1, 2016 | #7326 | reply | quote

Big numbers and the human brain

> Nor can the human brain distinguish that string of numerals from one that had a single different value on page 3288 of the thousands of pages it would take to print it out, if it could be printed out.

Whose brain determines the limit? For example, suppose you practice and become exceptionally good at distinguishing strings of numerals. Do more integers then exist for you than for me? Or does the fact that at least one other living person can comprehend such large numbers make them valid for me to use as well?

Is there a theoretical limit on the largest integer that can be comprehended by the human brain, no matter how much the person practices? Or if we are someday able to augment our brains with technology that allows us comprehend larger numbers, would that affect the size of maximum positive integer that exists?


Josh Jordan at 11:32 PM on November 1, 2016 | #7327 | reply | quote

a few replies

Yes you have to have an idea in your mind to think about it. But there’s an issue of which third party ideas you ought to listen to and address. I don’t know what your answer to that is.

Newton’s laws are sometimes used today, when we know they’re false, because we understand what kinds of questions they get wrong and by how much. We use Newton’s laws for easier approximations in scenarios where we know the error will be small and an approximation is acceptable. Their advantage is simplicity.

Yes every contradiction indicates a problem which would be good to solve. But we may still use both of the contradicting ideas in the meantime in limited ways, as we do with quantum mechanics and relativity. Relativity contradicts quantum mechanics, and we know there’s a problem there, but that doesn’t prevent us from using relativity successfully for GPS devices. Not every contradiction, or other problem, ruins every use case for an idea.


curi at 11:34 PM on November 1, 2016 | #7328 | reply | quote

counting by 10s should be a valid way to reach a number

and why not count by 1000s? 1000000s? etc


Anonymous at 1:22 AM on November 2, 2016 | #7330 | reply | quote

> the summary was hard to write for this one

"discussion is asynchronous and agreement and disagreement with Objectivist epistemology"


Anonymous at 2:24 AM on November 2, 2016 | #7339 | reply | quote

> Picked at random

> Mein Kampf happened to come to mind at that time as an old book available on the internet. I didn't think of it as hostile.

It might have come to your mind randomly, but why did you accept it uncritically? You had a first conjecture and accepted it as truth. Seems bad.

You drop an evil book into the conversation; don't you think it's going to interrupt? I got interrupted. I was thinking:

"Wtf? Mein Kampf? Why is this guy mentioning an evil book? Is he trying to seek attention through shock value? Does he consider it good? Does he consider it worth reading even if it's bad so we criticize it? Is there some current thing going about the book I don't know? Did he put this on purpose to test people's rationality because rational people would not be distracted by this and if they are he can call them on their irrationality? WHAT IS THIS MYSTERY WHAT IS GOING ON?"

If you wanted a random book just to serve as an example of your point, the most harmless book, or the one with the most in common with your audience, would have made more sense, no? Otherwise it's going to jump out.


Anonymous at 2:51 AM on November 2, 2016 | #7340 | reply | quote

hard to read these comments on an iphone. font size is too small.


Anonymous at 2:58 AM on November 2, 2016 | #7341 | reply | quote

i increased mobile font size some


curi at 9:13 AM on November 2, 2016 | #7342 | reply | quote

HB thinks i'm angry and is a dualist(?)

One-line summary: I’m not angry.

> I guess I am to blame for the discussion turning angry, so I apologize and suggest we get back on a more benevolent footing.

I’m not angry. I don’t know where that comment is coming from.

I disagree that those school courses mean much. Some people have full degrees, even advanced degrees, in any given topic without understanding much about it. I didn’t intend to and don’t want to argue over credentials, though. Alan Forrester has better physics credentials than I do, but it doesn’t mean he’ll be right if we have a disagreement relating to physics. What I saw in this conversation was multiple basic misunderstandings of scientific comments I made (like mixing up which number means what).

I totally disagree with separating consciousness from the physical world! Did Rand ever write something about that which I missed?

I think consciousness is a software process running on the brain which is literally a computer (and physical object).

More comments later.


curi at 8:41 PM on November 2, 2016 | #7348 | reply | quote

Arithmetic is Complicated and Fallible

There have been recent claims about the infallibility of some simple claims, e.g. that 2+3=5.

I think people underestimate what's involved in doing a calculation like 2+3. Talking about how a silicon computer calculates 2+3 may help people appreciate the complexity of computation. This may make it more intuitive for people that there's plenty of scope for something to go wrong when doing mathematical calculations. *The way your brain works is a lot more complicated and error-prone than this*.

How can a modern computer compute 2+3? This will be heavily simplified but the concepts are about right.

Computers use the base two number system (binary). 3 in base ten is 11 in base two.

The digit on the right is the 1s digit, then comes the 2s digit, then the 4s digit, then the 8s digit, etc. The value of each digit is 2^0, 2^1, 2^2, 2^3, etc. This is like the base ten numbers you're used to where the value of each digit, starting on the right is 10^0 (1's digit), 10^1 (10's digit), 10^2 (hundred's digit), etc.

Computers commonly use 64 bit numbers now, so 3 would be represented as 62 0's followed by 11. Computers can deal with other number formats but that adds more complexity.

A single digit in base 2 is called a "bit" and 8 bits are called a "byte". As a byte, 3 is 00000011. 2 is 00000010. A byte can represent a number from 0 to 255. The highest digit is the 128's digit.

BTW, when you write here, your letters are coded as bytes. Ignoring unicode, there's a conversion table. The letter "x" is represented by 120, which in binary is 01111000. "2", treated as a letter you can write, is represented by the number 50 which is 00110010.
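
You can check these representations yourself in Ruby (padding to 8 digits is just for display):

```ruby
puts 3.to_s(2)                # => "11"       (3 in binary)
puts format("%08b", 3)        # => "00000011" (3 as a byte)
puts format("%08b", 2)        # => "00000010" (2 as a byte)
puts "x".ord                  # => 120
puts format("%08b", "x".ord)  # => "01111000"
puts "2".ord                  # => 50
puts format("%08b", "2".ord)  # => "00110010"
```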

Why do computers use 1s and 0s? Because they're easier to deal with physically. It's easier to build something which can store or transmit a 1 or 0 than dealing with multiple values. For example, you can send electric current along a wire (1) or not (0). If you wanted to use base 3 (which has some theoretical advantages) you'd have to both send and detect different amounts of current, which is harder and more error prone. It's already hard enough to manufacture billions of transistors on a tiny chip!

Before we can add numbers, we need to begin with simpler operations. I'll talk about some logic gates and how to physically construct them. We'll begin with the logical NOT operation which takes a single wire as input and a single wire as output. The output is 1 if the input is 0, and the output is 0 if the input is 1.

The physical components for calculating NOT, in order: positive voltage, output, input, ground.

Explanation: the current will go to the ground if it can. But basically the wire is blocked off at the input location if it's 0, and allows the current to flow if the input is 1.

So when the input is 1, the current is able to flow to the ground. The output is then 0 because the current isn't going there. But when the input is 0, the current can't flow to the ground and instead flows out the output making the output 1.

Next we'll look at a NAND gate. NAND stands for "not and", meaning "not both". A NAND gate takes two inputs. If both inputs are 1, it outputs 0. Otherwise it outputs 1. NAND is great because it's universal and it's relatively simple to construct. Anything that can be done with logic gates can be done with multiple NAND gates. For example, using NANDs you can calculate NOT, OR, XOR, and AND. (XOR is "exclusive or", which is true if either input is true, but not both.)
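
Here's a minimal Ruby sketch of that universality, modeling a gate as a method on bits (0 or 1); the method names are mine, and only NAND is taken as primitive:

```ruby
# Only NAND is primitive; every other gate below is built from it.
def nand(a, b); (a == 1 && b == 1) ? 0 : 1; end

def not_g(a);    nand(a, a);                    end
def and_g(a, b); not_g(nand(a, b));             end
def or_g(a, b);  nand(not_g(a), not_g(b));      end
def xor_g(a, b); and_g(or_g(a, b), nand(a, b)); end

[[0, 0], [0, 1], [1, 0], [1, 1]].each do |a, b|
  puts "a=#{a} b=#{b}  NAND=#{nand(a, b)}  AND=#{and_g(a, b)}  OR=#{or_g(a, b)}  XOR=#{xor_g(a, b)}"
end
# not_g(0) => 1, not_g(1) => 0
```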

A NAND gate is designed like a NOT gate with a second input. In order: positive voltage, output, input, input2, ground.

Now for the voltage to reach the ground, both inputs must be 1, otherwise it can't get there. So if both inputs are 1, the output is 0. Otherwise the voltage doesn't reach the ground and flows through the output, so the output is 1.
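To make the universality claim concrete, here's a rough Python sketch (the function names are mine, purely illustrative): NAND as a function, with the other gates built out of nothing but NAND.

```python
def NAND(a, b):
    # Output 0 only when both inputs are 1; otherwise output 1.
    return 0 if (a == 1 and b == 1) else 1

# Everything else built purely from NAND:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))
```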

Storing a single bit in memory is more complicated than this. Moving on towards adding:

To add two bits (A, B) we'll need two output bits, the sum (S) and carry (C). If both inputs are 0, then both outputs are 0. If either input is 1, we'll want S=1 and C=0. But if both inputs are 1, then we want S=0 and C=1. So S is determined by A XOR B and C is determined by A AND B. This is a half-adder.
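In code form, reusing the gate functions sketched above (half_adder is just an illustrative name), a half-adder is one XOR plus one AND:

```python
def half_adder(a, b):
    # Add two bits; return (sum, carry).
    return XOR(a, b), AND(a, b)

# half_adder(1, 1) -> (0, 1): sum 0, carry 1
```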

A [full adder](http://isweb.redwoods.edu/INSTRUCT/CalderwoodD/diglogic/full.htm) is needed to add two numbers with multiple bits. For example, you need to be able to add the 3rd bit of both inputs as well as a carry. So a full adder has 3 inputs (A, B, C) in order to deal with carrying, and has 2 outputs (S, C). A full adder is constructed from 2 half-adders and an OR gate. (I'll skip how it's built, click the link if you want.) Note the full adder can be built only from NAND gates, which we know the concepts to construct physically.
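Following that construction (two half-adders plus an OR gate), a full adder can be sketched like this, reusing the half_adder above (again, the names are my own):

```python
def full_adder(a, b, carry_in):
    # Add two bits plus an incoming carry; return (sum, carry_out).
    s1, c1 = half_adder(a, b)          # add the two input bits
    s2, c2 = half_adder(s1, carry_in)  # add the incoming carry
    return s2, OR(c1, c2)              # a carry from either stage carries out
```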

Now we can add 2 and 3. We'll write them as 3-bit numbers to keep this shorter, so 2 is 010 and 3 is 011.

We start adding on the right. We use a full adder with both of our 1s digits and a carry of 0.

FullAdder(0,1,0) -> (1,0)

Now we know the 1s digit of our answer is a 1. And we got a carry of 0. Next we'll add up our 2s digits, along with the carry:

FullAdder(1,1,0) -> (0,1)

This time the sum is 0, but the carry is 1. The 2s digit of our solution is a 0. (So far our solution has 01 on the far right.) Now to add the 4s digits with the carry of 1:

FullAdder(0,0,1) -> (1, 0)

We got a 4s digit of 1, and no more carry. If we kept adding we'd just get 0s the rest of the way. So the result of our addition, putting back the 0s we omitted calculating, is 00000101. (I've put a 1 for the 1s and 4s digits, as we calculated.) That's 5 in binary. Yay! 😁
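Chaining full adders from the rightmost bit leftward (a ripple-carry adder) reproduces those steps. A minimal sketch using the full_adder above (add_bits is an illustrative name):

```python
def add_bits(a_bits, b_bits):
    # Add two equal-length bit lists (most significant bit first).
    result, carry = [], 0
    for a, b in zip(reversed(a_bits), reversed(b_bits)):  # start at the 1s digit
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return list(reversed(result))

print(add_bits([0, 1, 0], [0, 1, 1]))  # 2 + 3 -> [1, 0, 1], which is 5
```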

With 8 full adders we could add two 8-bit numbers. It'd involve 16 XOR, 16 AND and 8 OR gates, and quite a few wires. We'd also have to worry about overflow. What if you have a carry on the last bit? You can either just throw it out (so 255+1 adds up to 0) or you'll need something more complex to deal with it.
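For example, if the final carry is simply thrown out, an 8-bit adder wraps around, which a quick sketch can illustrate (add_8bit is an illustrative name):

```python
def add_8bit(x, y):
    # Keep only the low 8 bits; the carry out of the top bit is discarded.
    return (x + y) & 0b11111111

print(add_8bit(255, 1))  # -> 0
```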

While this can be done with NAND gates, it's more efficient to use a mix of gates.

We've dealt with irreversible classical computation. There's also quantum computation and reversible computation, both of which are interesting.

We haven't dealt with timing. CPUs go step by step (billions of times per second) using a clock and everything has to be precisely timed and kept in sync.

We haven't dealt with storing and retrieving data in memory.

Sending signals over the internet, like any message about 2+3, requires a lot of effort for detecting and correcting transmission errors. This is a great article about computer error correction.

A fair amount of the above information is available in the Feynman Lectures On Computation which is a great book and covers some error correction too.

**Big picture**: what silicon computers do is more complex than I've conveyed, and the way your brain works is still more complex. There's plenty of complexity in a 2+3 calculation to make mistakes. There's many different components that could have an error, and which have to work together in exactly the right way to get the correct final result. The specifics of doing "simple" math aren't so simple.


curi at 12:22 AM on November 3, 2016 | #7352 | reply | quote

Induction questions.

I have some critical questions about induction. The first is how do you differentiate support from non-contradiction?


curi at 12:22 AM on November 3, 2016 | #7353 | reply | quote

Arithmetic calculation is complex; dualism; principled criticism.

HB I don’t think you’re engaging with my main point, which was pretty simple. It’s that there’s a lot more complexity in arithmetic calculations than people realize. (The use of lookup tables isn’t relevant. That isn’t doing the calculation and isn’t a way to figure it out in the first place or check it’s correct. That’s just remembering the answer from a previous calculation.)

I’m attempting to build up an understanding of the issues from simple, correct points. You’re attempting to argue a lot of very complex points right off the bat which bring up many disagreements at once, and you’re arguing primarily with things I didn’t say in this post. I don’t think that’s going to be effective for us.

One of the points of disagreement brought up is some sort of dualism which, from what I can tell so far, separates mind and brain not as hardware and software but in some other way. That one point is a big disagreement. Are the details of that written down somewhere?

> Finally, if one “is not afraid” of the arbitrary, then arbitrary claims have some sort of epistemic standing until refuted. But, on that premise, nothing can be refuted. Because when I say, “Here’s my refutation” the skeptic is licensed to say, “Maybe that’s wrong; you think it’s a refutation, but you’re a fallible being.” And any reply to that is subject to the very same skeptical rebuttal.”

You seem to be assuming we have no methods of dealing with repetitive skeptics. In the Popper thread, I proposed you play devil’s advocate (regarding biological evolution) so I can show you how this is handled, which I think will work better than explaining the concept. The concept, briefly, is that arguments can address categories of ideas, not just individual ideas. So you don’t need a new argument every time a skeptic repeats the same error.


curi at 10:17 AM on November 3, 2016 | #7361 | reply | quote

Fallibilism isn’t skepticism.

Fallibilism isn’t skepticism, so I don’t see the relevance of anti-skeptical arguments.

Ayn Rand clearly advocates fallibilism. ITOE:

> Man is neither infallible nor omniscient; if he were, a discipline such as epistemology—the theory of knowledge—would not be necessary nor possible: his knowledge would be automatic, unquestionable and total. But such is not man’s nature. Man is a being of volitional consciousness: beyond the level of percepts—a level inadequate to the cognitive requirements of his survival—man has to acquire knowledge by his own effort, which he may exercise or not, and by a process of reason, which he may apply correctly or not. Nature gives him no automatic guarantee of his mental efficacy; he is capable of error, of evasion, of psychological distortion. He needs a method of cognition, which he himself has to discover: he must discover how to use his rational faculty, how to validate his conclusions, how to distinguish truth from falsehood, how to set the criteria of what he may accept as knowledge. Two questions are involved in his every conclusion, conviction, decision, choice or claim: What do I know?—and: How do I know it?

Rand says that beyond percepts you have to use a process of reason which you may not apply correctly (you could make a mistake). Math calculations aren’t percepts.

AS:

> Do not say that you’re afraid to trust your mind because you know so little. Are you safer in surrendering to mystics and discarding the little that you know? Live and act within the limit of your knowledge and keep expanding it to the limit of your life. Redeem your mind from the hockshops of authority. Accept the fact that you are not omniscient, but playing a zombie will not give you omniscience-that your mind is fallible, but becoming mindless will not make you infallible-that an error made on your own is safer than ten truths accepted on faith, because the first leaves you the means to correct it, but the second destroys your capacity to distinguish truth from error. In place of your dream of an omniscient automaton, accept the fact that any knowledge man acquires is acquired by his own will and effort, and that that is his distinction in the universe, that is his nature, his morality, his glory.

Notice Rand talks about error correction too (“the means to correct it”).


curi at 10:17 AM on November 3, 2016 | #7362 | reply | quote

How is information about probability?

HB, who is apparently a frequentist, writes:

> Information is a function of probability, and probability is, at root relative frequency. That’s my view, not anything that’s part of Objectivism. And I don’t claim it as a certainty.

What do you mean? In what sense do you think measurable information about a table (e.g. length, mass, color, number of carbon molecules) is (a function of) probability?


curi at 10:26 AM on November 3, 2016 | #7363 | reply | quote

3 posts

One-line summary: BoI criticism?

> I have read the chapter you cite

Do you have a criticism of some mistake you found in The Beginning of Infinity ch. 2? If so, have you or anyone else written it down?

One-line summary: Presupposing intelligence.

> Make an intelligent guess as to what the unknown Things and/or Characteristics might be using what you already know.

This presupposes intelligence (in order to make an intelligent guess) when one of the main issues under discussion is how intelligent thinking works in the first place (e.g. how are intelligent guesses made?)

One-line summary: Perception is indirect. So what?

> It seems like nothing would qualify as direct perception (other than perhaps introspection) under that standard.

We don’t seem to be talking about the same thing if you would consider introspection a form of perception. No sense organ is used in introspection.

> What is the point of splitting perception into direct and indirect forms?

There’s no split. All perception is indirect. There are things in between your mind and a dog you see, including photons, eyes and an optic nerve.

Just like human-constructed measuring tools, our sense organs have design flaws. Understanding these design flaws helps us use our sense organs better. Whereas if we naively and falsely think they give us direct and 100% accurate information about the world, we’ll be fooled.

Three examples:

Your eyes have a blind spot. If you know that, then it helps you realize that objects aren’t actually disappearing from the world when you sometimes can’t see them.

Perception isn’t instantaneous. When you see a lightning strike, you hear thunder later. That doesn’t mean the thunder happened later, as a naive direct-perception believer would have to conclude. By understanding what’s going on with your perception of light and sound, you can understand the world better and realize the lightning strike was around 1 mile away per 5 seconds of delay before you heard the thunder.

If your eyes see a lot of one color, they adjust to it; and if you then look at something else, they don’t adjust back to normal instantly, so you’ll see things wrong (they can look like a different color than they are). You can also be misled about color by the lighting of the room. Artists take this into account rather than expecting that at all times, in all lighting, their eyes give them direct and accurate perception of color.

> When you see a dog, what you grasp with perceptual immediacy is not that there are photons hitting your retinas, but that there’s something that looks a certain way over there.

Glossing over the complex process involved isn’t going to help anything. It feels instant, but it isn’t, and sometimes that difference matters. It feels direct, but it isn’t, and sometimes that difference matters. A lot of times approximations are good enough, but don’t claim those approximations are true.

> Within Objectivism, perception is necessary because positively supporting proofs have to start somewhere–from some base of knowledge that can be accepted as self-evident (not requiring evidence to support beyond itself). And I would think perception in some form would be necessary for any system seeking knowledge of reality.

Making false claims about perception because you think they’re necessary to get anywhere in epistemology is a really bad idea. At worst you could say that we do know (otherwise how can an iPhone work?), but some details of how we know are an unsolved problem.

Popperian epistemology addresses how we know with no self-evidents. Everything is open to criticism, including itself and our perceptions. We do have perceptions and they help us know reality, but they aren’t direct, self-evident or infallible. Optical illusions and other errors sometimes come up, and come up a lot more when dealing with high speeds, long distances, tiny objects, and other stuff that our genes aren’t very optimized for dealing with.

Regarding “information”, making statements about computations and physics is important to understanding Popperian epistemology. It does play a role in our philosophy which is more integrated with physics than Objectivism is. I think that greater integration is a great thing.


curi at 11:02 AM on November 3, 2016 | #7364 | reply | quote

Computers don’t compute?

ET:

>> Also, as a scientific matter, computation is done on the information from our eyes before it reaches our mind.

HB:

> No computation is done there. That’s metaphor. There’s no computation done anywhere outside the human mind. Even computers don’t actually compute. In philosophy, we have to speak literally, not metaphorically.

I was speaking literally. Have you read science about our visual system? Information from the eye is processed in a lossy (irreversible, information-losing) way before it reaches the mind. In short, visual information is simplified according to some algorithms specified by our genes prior to perceiving it.

This is just like if you’re writing an iOS photography app but don’t have access to the raw images from the camera, only images which iOS has already modified with some algorithms. (Offhand, I think apps can access raw images now, but couldn’t in the past.)

But I don’t know what you’re talking about by saying computers don’t compute. My computer can compute 2+3 and NAND among many other things. I guess you must be using some non-standard definition of computation? To understand me, it’s important to read what I’m saying with my terminology, not some alternative terminology you prefer. (This came up a lot with Popper, too, where words he used were read with an Objectivist meaning instead of Popper’s own meaning.)


curi at 11:12 AM on November 3, 2016 | #7365 | reply | quote

Going beyond correlation.

> That’s the brilliance of Dr. Peikoff’s identification: we can’t jump in to begin the discussion of induction with the case of higher-level inductions. The first inductions, like “Rain wets the ground” do not require the sophisticated steps you mention.

How do you get from “rain is correlated with wet ground” (or more typically a statement like “it rained and then the ground was wet”) to “rain wets the ground” (causal statement)?


curi at 11:32 AM on November 3, 2016 | #7366 | reply | quote

How We Know doesn’t answer my questions.

I’m reading the book. I already read the chapter on perception. Not having read something isn’t the issue here. It didn’t help answer my questions or criticisms. I don’t recall anything in the perception chapter saying computers don’t compute, and just did a quick text search to double check my memory that it’s not there. As far as I can tell, the issue is that you don’t understand my (or Popper’s) positions, questions or criticisms (hence not addressing them in the book). I don’t know why you think you covered the issues I raise, but my guess is that you’ve misunderstood what issues I’m raising.

If you give a quote I can reread that section. You do say, in chapter 1, not chapter 2, that:

> Televisions cannot literally talk and computers cannot literally make computations.

But you don’t say what definition of “computation” you’re using. Not the standard one! TVs make sound but don’t talk, OK, that’s fine and I can see what your terminology is there. But I don’t know how to make sense of what you’re saying about computers not computing, which sounds to me like saying TVs don’t make sound.

You also say stuff about the impossibility of AI software that doesn’t address my views on the matter (there’s nothing I recognize there as a criticism of something I believe, or answering the questions I would ask, or handling the criticisms I would make of your position). A lot of the book is like this in my experience.

There’s a hint when you write:

> Addition is an action of consciousness.

And deny that computers add. But you don’t address, argue with, or talk about my view on the matter. You don’t answer the questions I would ask. You don’t address the criticisms I would make. So reading the book doesn’t cover this for me.

In this case, you don’t address that you can use a computer to add something and not actually do the addition yourself in your mind and still get the answer.

> Computers don’t follow programs, they simply obey the laws of physics. That’s all that goes on inside them.

That’s like saying ants don’t follow scent trails, they just obey the laws of physics. It’s some kind of abstraction-denying reductionism, which is refuted in David Deutsch’s books, and you don’t say anything that addresses, argues with, or acknowledges the existence of our position. (Maybe you’ll claim ants are conscious to get around this. I disagree with your take on animals.)

(By the way, the PDF uses special formatting that frequently breaks copy/pasting quotes. Examples are where there’s a “Th” or “fi” the PDF uses a single special character instead of the two letters, so then those letters are missing when you copy a quote. This also breaks text searches to find words.)


curi at 2:58 PM on November 3, 2016 | #7367 | reply | quote

ppl not paying much attention

The context of the question was the claim that “rain wets the ground” is a first level induction straight from perception. Using existing knowledge of water, liquid, gravity, sky, falling, on-top-of, plus some intelligent thinking to put them all together, doesn’t help with the epistemology issues under discussion like how to get started learning anything in the first place.


curi at 6:37 PM on November 3, 2016 | #7368 | reply | quote

explanations not primitives

13988

Re: Harry Binswanger’s post 13949 of 11/3/16

One-line summary: Limitations to knowledge should be judged in terms of explanations, not alleged primitives.

The sum 1+2+3+…+n for any natural number n is n(n+1)/2. For an explanation of how mathematicians worked this out see http://www.purplemath.com/modu…..nductn.htm. I have no particular problem with this explanation or with the existence of an infinite set of natural numbers. The reason I have no problem with it is that I’m focused on the explanation for any particular statement I come across. If the explanation solves a problem and has no known criticisms, then it poses no difficulty.
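As a quick sanity check of that formula (my own illustration, not part of the linked explanation), you can compare it against direct summation for a few values of n:

```python
# Check that 1 + 2 + ... + n equals n*(n+1)/2 for some sample values.
for n in (1, 2, 10, 100):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```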

Some Objectivists, such as HB, do seem to have a problem with statements like the formula I gave above. It refers to an infinite set of numbers. Since there is some highest number people have actually counted to, HB claims that no higher number exists. So my question is: what is the flaw in the explanation given in the link above?

One of the problems with inductivism and similar ideas is that people think they are proving stuff from some basic set of primitives. This leads them to focus on trying to find a set of primitives they can base stuff on. This is not a good idea for a couple of reasons.

One of the problems is that any set of primitives can only be properly understood in the light of explanations. The colour of an item depends on lighting because your brain does complex stuff to interpret the information arriving at your eye in terms of differences in the signals it gets from different parts of the visual field. So if you think item X and item Y are the same colour, you may be surprised if you compare them under the same lighting.

Another problem is that it leads to a weird sort of conservatism where you make arbitrary distinctions such as distinctions between numbers people have counted and numbers they haven’t. But mathematical explanations often don’t take any notice at all of how high you have counted. They are not limited by parochial facts of history like that. Likewise for the laws of physics. Nobody has seen the inside of a star, but astronomers know lots about the inside of stars anyway.

In critical rationalism nothing is inherently beyond our understanding and this is inseparable from the notion that no idea is beyond criticism. There is no place where you have to stop questioning something because it is a primitive. So your knowledge of primitives and of everything else is not limited by arbitrary barriers.


Alan Forrester at 11:22 PM on November 3, 2016 | #7369 | reply | quote

Three questions.

Trying question 1 again: Say you have a piece of evidence and 10 ideas which contradict each other but do not contradict the evidence. Which are positively supported (and how much?) by the evidence? Why those? How do you decide? What do you look for to decide which non-contradicted ideas are supported and which aren’t? (I know you can first look at contradiction. The ideas contradicted by the evidence are not supported by it. But then you still need some more steps, some more stuff to judge by.)

Question 2: For early concept formation, which similarities and differences are used? If you see 5000 similarities, which ones do you make concepts out of?

Question 3: ITOE says:

> The three persons are three individuals who differ in every particular respect and may not possess a single identical characteristic (not even their fingerprints).

I think How We Know has a similar comment. What about characteristics like number-of-fingers or height-rounded-to-the-nearest-inch? Why are all(?) definable characteristics with integer values being excluded from consideration as characteristics? I’m also concerned with boolean characteristics, e.g. the characteristic of facing the door or not facing the door.


curi at 8:46 AM on November 4, 2016 | #7373 | reply | quote

Skepticism and fallibilism differences.

Skepticism says knowledge is impossible.

Fallibilism says infallible knowledge is impossible. That is, it’s impossible to get knowledge which offers a 100% omniscient guarantee against error. One can’t get knowledge that is the final, perfect truth that could never be improved in the future.

Objectivism is a fallibilist philosophy which says omniscience is not the standard of knowledge. However, while Rand did write this, she didn’t stress it. The combination of an infallibilist culture (and some Objectivist terminology that sounds infallibilist), and no heavy emphasis on fallibilism from Rand, has led to many Objectivists trying to make some exceptions to the principle of fallibilism and put arbitrary limits on it. Some Objectivists claim we achieve omniscient infallibility in some special cases such as dealing with easy stuff, without attempting to address the regress argument.

Skeptics in general are disappointed infallibilists. They wanted omniscience and, failing to find a way to get it, concluded that we know nothing.

Here are two complications:

Objectivism talks about contextual certainty. This means specifying an unambiguous question involving the current context. Stuff like: “Given what we know today, and a short amount of time to make a decision, what should we decide?” The answer to this question will not change with future discoveries because it was a question about what to do before those future discoveries were made. However, one can be mistaken about the right decision to make today given limited information. There’s no way to 100% guarantee any of your decisions are contextually correct. There’s nothing you can do that makes it impossible to make a mistake. (That’s fine! The possibility of being mistaken doesn’t mean you don’t know what you’re talking about or that you should hesitate or doubt yourself. It has some implications, but nothing awful like those.)

The second complication is people sneak stuff into the premises. They say unclear stuff that means, “If I think about the matter, in context, and don’t make any mistakes then I’ll have a contextually perfect conclusion that can’t be mistaken.” There are lots of ways to be sneaky about this so people don’t realize what the trick is. One trick is to use a word like “proper” as a synonym for not making any mistakes. E.g.: “If you properly think about the matter, then you can’t be mistaken.” Then when I reply pointing out ways mistakes are possible, they say those don’t count as proper thinking since making a mistake is improper. OK, sure, but there’s no way to ever know that a premise like that (about not making any mistakes) is true in reality in a real situation. There’s no actions you can take in reality that will 100% guarantee no mistakes were made about something. That would require omniscience.

Our lack of omniscience shouldn’t upset or disturb anyone. But it does. People spend a lot of time trying to claim some partial omniscience (infallibilists) or saying how weak and pitiful non-omniscience makes us (skeptics).


curi at 9:05 AM on November 4, 2016 | #7374 | reply | quote

2 short replies

One-line summary: I’m trying to ask about exact low level issues.

> By “10 ideas which contradict each other but do not contradict the evidence,” I assume you mean 10 different hypotheses that might explain some phenomenon.

Yes.

But the intended context of my question is the fundamentals of epistemology. Scientists use intelligent thought to figure it out, and there are high level methods for them to use. The issue I’m trying to ask about is stuff like how does intelligence work in the first place? It’s a lower level issue.

One-line summary: I’m not attacking people.

I’m not saying these are deficiencies, I’m saying they are facts to be taken into account. I’m not attacking people, I’m analyzing how people work.

Popper loathed Heidegger and Hegel for what it’s worth.


curi at 9:48 AM on November 4, 2016 | #7376 | reply | quote

2 more

One-line summary: Sense organs don’t provide any infallibility.

> Why would anything in these examples result in the senses misleading us?

They mislead you if you think you directly, instantly, accurately perceive reality.

They don’t mislead you if you understand how your senses work.

I don’t think I’m saying anything remotely original here, nor anything about being led astray (senses don’t lead people, people lead people who use senses). I don’t know why you’re making a big deal out of this topic. I think it’s because you want to misuse “senses follow the laws of physics” (so error is inapplicable) to try to get some infallibility into your philosophy. Your understanding of your sense organs (just like your camera or scale) is fallible, and people do make mistakes about this stuff. Fallibility isn’t going anywhere.

--------

You have to provide some clue that a book has value, that makes sense from my current perspective, before I read it. You are unable or unwilling to criticize BoI ch2 specifically and don’t seem interested in discussion. That’s your choice.

I read the material about Gibson in How We Know. You haven’t made clear to me anything I’ll learn by reading Gibson’s book. I’ve seen no indication Gibson has some good argument that contradicts me. If you could give a very short statement of something I believe, along with a very abbreviated criticism from Gibson, that would be one way to proceed. (I don’t think you know what I believe, and therefore you aren’t in a position to say what I ought to read.)


curi at 11:16 AM on November 4, 2016 | #7377 | reply | quote

HB reply

One-line summary: Your positions can be questioned, and are being questioned.

> I think that it is unquestionable that counting is a simple operation.

If you were more of a fallibilist, you wouldn’t think any of your ideas are unquestionable.

This is not the bad argument you keep bringing up where someone says, “You could be wrong, so you don’t know.”

I gave an argument showing the complexity of doing seemingly simple math. You haven’t provided a simpler method of calculation in physical reality; instead you just sort of assume intelligence as a premise and disregard all the complexity involved. We seem to have some major disagreements about the premises underlying this topic.

That’s fine as far as it goes. To then call the matter unquestionable, rather than work on engaging with the questioning arguments about it (or say you aren’t interested), is a refusal to think.


curi at 11:26 AM on November 4, 2016 | #7378 | reply | quote

HB keeps evaluating things I say using premises I deny

One-line summary: You don’t understand Critical Rationalism.

> This position is based on much knowledge that one knows to be (and treats implicitly as) infallible, final, perfect truth. It assumes that there is such a thing as error, for example. What is error? An idea in consciousness that contradicts the facts of reality. Thus we have the “infallible” recognition that one is conscious, that contradictions don’t exist, and that existence does exist.

> Without the facts of existence, identity, and consciousness, there could be no such thing as error.

You aren’t understanding what our position is. Our position on what is error, what exists, whether contradictions exist, and everything else, is open to critical questioning. We hold our positions on these matters as knowledge, but not as infallible or as special axioms. None of my positions are held as guaranteed never to be improved, refuted, changed. We have a principled view that doesn’t rely on exceptions and special categories in the way you’re used to.

So, no, there aren’t any infallibilist assumptions here. Just regular premises. Any of them could be elaborated on and debated.

> This whole slew of tens of thousands of words his been the attempt of me and others to deal with the regress argument.

I don’t know what you mean. You haven’t been saying things like, “The regress argument says X, Y, Z. The mistake is Z, which is a mistake because W.” I guess you mean that you’ve been indirectly trying to address the regress argument by, e.g., debating claims you would then use in your argument about it.

The thing I can most recognize as a potential answer to the regress argument is claims about self-evidency and axioms, which I regard as irrational claims of foundationalist authority for some ideas. I don’t regard making exceptions to be a solution to the regress argument. If you do, I don’t think you’ve clearly and directly said so.

Popper’s position on regress is that some positions are refuted by regress arguments, but not Critical Rationalism. The regress argument is correct — there’s no direct solution — but it doesn’t work against all epistemologies. The regress argument doesn’t have a mistake, but you can work around it.

> 2. You can’t guarantee that you reached your decision rationally. That’s false. You can and had damn well better be sure you reached your decision rationally.

I wasn’t conflating issues. I agree that your #1 is true. And I’m claiming this thing, #2, is true. I guess you’re aware that sometimes people think they reached a conclusion rationally, but didn’t. And any kind of double checking won’t offer a guarantee due to regress (double checking the double check, etc). So this is an example case where the regress argument does work. (Accepting the regress argument doesn’t lead to skepticism because not all approaches to how we know run into the regress problem, only some do.)


curi at 12:25 PM on November 4, 2016 | #7379 | reply | quote

trying again to ask HB about the problems in epistemology that CR solves and Oism doesn't

One-line summary: I’m asking about precise fundamentals of epistemology.

Then how do you pick which hypotheses to consider and which to exclude from consideration? And also pick a smaller number, say 4 hypotheses. Then I have the same question as before. You have a set of evidence, and none of it contradicts any of the 4 hypotheses. How do you use the evidence to differentiate and select between the hypotheses? What relationship between evidence and hypothesis is there besides non-contradiction? You say there is a different relationship called support, but what are the details of it?

And what’s the method of deciding how much positive support? I was asking about methods.

> Use common sense.

I am trying to ask about the fundamentals of epistemology, and you say use common sense. That’s a hierarchy error. Explaining how common sense (and the rest of intelligence) works in terms of prior, lower-level stuff is the point here. Intelligence can’t work by common sense. Common sense can’t be part of the explanation of how common sense works. That’d be circular.

More broadly, I think you don’t understand (or will deny) that there are infinite patterns in the world, and selecting which to pay attention to is one of the main problems in epistemology. You seem to take for granted some selection between many, many things, and also to badly underestimate how many there are. But you can’t give precise means of doing it (Critical Rationalism can).

Will you accept there are far more wrong ideas than right ones? Far more unimportant patterns than important ones? I don’t need infinity to make the basic point here: selecting the right patterns/similarities to make concepts out of is hard and needs a method (and that method can’t involve intelligence, because what we’re doing right now is talking about how intelligence works).

How do you pick the important patterns (e.g. types of similarity) out of the many more unimportant patterns? How does a child do that? I agree with you that people can do it somehow (no skepticism).

> Because binary characteristics–ones that are either present or absent, like even or odd for numbers, present no difficulty. (I discuss the wider problem of “quantized” aspects of things in my book.) The problem of concepts comes up when there is more or less continuous variation in measurement or degree.

I don’t understand how to put this together with your other comments saying infinity doesn’t exist, real numbers don’t exist, and very big numbers don’t exist.

Before you wanted to focus on small integers.

But here you say small integers are easy, forget about them, continuous variation (real numbers!?) is where to focus.

But I think the answer to my question is you do accept the existence of characteristics with integer or boolean evaluations. OK, great.


curi at 12:47 PM on November 4, 2016 | #7380 | reply | quote

don't know what Evan's point about perception is; presumably infallibilism

One-line summary: Our empirical knowledge, e.g. that there’s a dog over there, is fallible. This is fine as long as you have an error-correcting epistemology, but is not fine if your epistemology has fundamental problems handling error and requires some infallible foundations or other non-error-correcting part.

Many people’s eyes are inaccurate, so they wear glasses.

If you want to say “eyes do what they do”, that’s like saying that a bathroom scale isn’t inaccurate if it says you’re 20 lbs lighter than you are, it’s just obeying the laws of physics.

Yes, if you know what’s going on you can often use inaccurate eyes (e.g. with glasses) or scales (e.g. by adding an extra 20).

Some measuring devices are actually broken enough they aren’t useful even if you know what’s broken about them.

> Why is it perception, rather than inference, which is considered fallible?

It’s all fallible.

If you want to say no matter how badly your eyes malfunction they are just mechanical physical objects, and all the errors are in your mind, that’s fine. In that terminology, then all perception is interpreted by fallible ideas, so there’s still no infallibility.

Anyway, what’s your point?


curi at 12:56 PM on November 4, 2016 | #7381 | reply | quote

Rand quotes making an exception to fallibilism?

> HB and I agree that any statement in the form “A is A” is certainly true. It cannot be mistaken.

Rand advocates fallibilism, at least in some form. It’s in her books.

Is it a limited form of fallibilism with exceptions? HB and Betsy Speicher seem to think so.

Can you provide some clear quotes, from Ayn Rand personally, saying the principle of fallibilism is limited and has some exceptions? And saying what the exceptions are and how infallibility (e.g. Betsy’s “cannot be mistaken”) is achieved.

PS The process of reducing a conclusion to “A is A” is itself fallible. (This is one instance of the regress argument.)


curi at 1:14 PM on November 4, 2016 | #7382 | reply | quote

the different Oists don't keep their story straight on infallibility

> Fallibilism holds that man can NEVER be certain about ANYTHING.

This depends on the meaning of “certain”, which I’ve been assured above (and read from Rand) merely means having knowledge and doesn’t require infallible omniscience.

The question about an Ayn Rand quote is still open. Be careful with quotes which use controversial terminology like “certain”.

If you read above, there’s extensive claims by HB and others that certainty allows for the metaphysical possibility of error. In my understanding, HB considers “A is A” and “2 + 3 = 5” to be beyond certainty, to be something else more like: “It’s infallible. I can 100% guarantee that an omniscient God would agree with it. It’s literally impossible that it could be mistaken. At no future date could it ever be reconsidered or improved.” But when HB says he’s certain about something, in general, I don’t think he means that really strongly infallible statement.


curi at 1:53 PM on November 4, 2016 | #7384 | reply | quote

Evan makes more sense than HB

One-line summary: Double checking we agree on perception, and proceeding.

OK, sure. I agree that perception is infallible in the sense you mean, though I’d prefer to say fallibility or infallibility is inapplicable.

I’m also fine with other terminology which says that perception is fallible. In this case, as often happens, there’s multiple ways to correctly talk about the same thing.

Let’s be careful though. “There is a dog over there” counts as a “conclusion” in your terminology (the dog isn’t perceived, it’s a conclusion from one’s actual perceptions), right? Whereas some other people here would say you perceive the dog and it isn’t a conclusion.

And earlier you said perception is direct. But now I take you to mean we have direct perception of photons (not dogs). Is that what you mean? I don’t think that’s normal terminology for this that others are using, but I do see some merit in it (you want to say we have direct contact with reality, which I appreciate) and I don’t mind it.

So, Evan, if we agree on this, what if anything do you disagree about? Earlier you wrote:

> Within Objectivism, perception is necessary because positively supporting proofs have to start somewhere–from some base of knowledge that can be accepted as self-evident (not requiring evidence to support beyond itself).

I think we have some disagreements here. One way to proceed is you could give your answers to my questions in the induction thread which I’ve been discussing with HB. Another thing I’d want to know about this is if you think self-evidence is fallible? That is, are you just trying to fallibly solve the regress problem, or are you going to say that stuff like “2+3=5” and “A is A” cannot possibly be mistaken as others have been saying?


curi at 2:05 PM on November 4, 2016 | #7385 | reply | quote

going ok with evan

One-line summary: Not enough physics. Plus induction questions.

> Representationalism holds that we perceive the world by examining the percepts generated by our sense organs. What actually occurs is that our percepts are a form of us being aware of things.

Both sides of this debate strike me as too vague. (I had the same impression reading about this in How We Know.)

Where’s the physics? Photons pick up information from dogs and carry it to our eyes where we measure some of that information, process it, and send it to the brain. Our mind gets access to this data and we use the data in our thinking.

Whether you’re consciously aware of sense data involves some choice. You can tune things out. I don’t get the purpose of the claim about being aware.

I agree that the following idea is wrong: your sense organs create a picture of a dog in your head, which you then look at in your head. That leads straight to regress.

Some of the key points, to me, are:

1) There are no infallible foundations to build knowledge on. Perception doesn’t do this. (People want foundations to address the regress problem, which I think should be addressed without foundations of any kind. The critical method doesn’t lead to regress the way justificationist views do.)

2) All thinking should use methods that allow for error correction, with no exceptions. This excludes e.g. calling one of your claims unquestionable. It means you do the same thing with “A is A” that you do with other ideas. You treat it the same.

We might agree about this.

BTW I find when people are most sure about stuff, and won’t entertain criticism of it, they’re more often making some mistakes. (At least regarding the topic overall if not regarding one exact claim. For example, I don’t dispute 2+3=5 specifically but I do dispute HB’s thinking about integers. And I don’t dispute A is A, but I do dispute some of the infallibilist attitudes involved in how people think about A is A.) If they’d stick with normal methods, instead of having special exceptions, this wouldn’t happen. And what’s the point of the special methods? There’s nothing to gain by shielding a true idea from criticism!

I am curious how you address the problems of induction, e.g. which pattern(s) do you induce or form a concept from out of the infinite patterns that fit your data? And which ideas do you claim a set of evidence supports out of the infinity of ideas which are compatible (non-contradictory) with that evidence?


curi at 3:03 PM on November 4, 2016 | #7391 | reply | quote

evan reply

One-line summary: Let’s discuss how knowledge is created.

> I believe that you have argued in the past that all errors are of unanticipated source before they are identified, but I challenge this idea.

I don’t think I have. I would say some errors are unanticipated and some are anticipated well. (E.g. I may proofread an essay anticipating some typos, and find some typos.)

Because reason deals with error in a general purpose way, I don’t see the point of trying to restrict the possible categories of error, unless one wants to stop using reason for something. I do see the point of trying to anticipate errors in order to decide where to put resources, which I have no objection to.

> You have a system which would even work outside of reality (for all I know).

I don’t think it would. Thinking is a physical process. My system deals with thinking. Evolution is a physical process. My system has various connections to our physics and our physical reality. I don’t know how to deal with anything else besides reality.

> Objectivism was made to address the needs of a particular context: Man, in his life in reality. You would have to offer some very compelling arguments before I would even begin to consider other contexts as being worth consideration. What’s at stake here?

Some Objectivist epistemology positions are false. There’s a mix of logical errors, misunderstandings, incompleteness, vagueness, etc. This should be fixed. Critical Rationalism has solutions. Getting stuff right should be enough to care about, but there are extra benefits like better integration with physics and computation (which means e.g. being considerably closer to building an AI, though still not close).

This isn’t primarily about fallibility. You can be a fallibilist inductivist. I don’t see any serious incompatibility between fallibility and Objectivism. Rather it’s individual Objectivists from an infallibilist culture who object to fallibility.

A more interesting issue is that induction doesn’t work. Like at all. No one has ever learned anything by induction. It’s literally impossible. So inductivists misunderstand their mind and learning in a pretty serious way. (Unless you redefine the word “induction” heavily enough, which some people do attempt. For example, some people claim that induction refers to any process of getting a generalization, by any means whatsoever, including by using Critical Rationalism. Redefining “induction” to dodge criticism is extremely common.)

On a related note, positive support is a myth with logical problems.

To begin discussing this, I suggest you answer my questions (either in the induction thread or above) and also tell me what version of induction and support you believe is correct. If it’s written down somewhere that’d be great, just let me know which book(s) you agree with. What I want is a full account (including concept formation) which starts at the start rather than presupposing stuff like common sense or intelligence. This will provide a target for criticism (in order to point out what doesn’t work) that’s better than me picking some version of induction to criticize (that you may not have thought worked anyway). If you’d prefer to write your own, I don’t mind.


curi at 4:03 PM on November 4, 2016 | #7395 | reply | quote

they have no clue how intelligence works, yet think they are solving epistemology by saying how to use intelligence

One-line summary: Evolution is the only known way that new knowledge is created.

> The search for “criteria” sounds like a Rationalist quest for formulas that can be applied mindlessly. The child does not need, and could not understand, criteria.

You need a mindless (non-intelligent) way that intelligence works. Presupposing an intelligent mind that has a way to know, when trying to explain how we know, is circular. (“We know by using our intelligence to know things.” doesn’t answer the fundamentals of epistemology.)

Only one good idea has ever been discovered to answer this problem, which is evolution.


curi at 5:08 PM on November 4, 2016 | #7397 | reply | quote

they pretend to have all the details like concept formation but they're actually just skipping a lot of the low level

One-line summary: You guys focus on high level epistemology, I do both high and low level.

I think a lot of our disagreement is this:

I think epistemology is about how intelligence works, but you guys think epistemology is about how to use the intelligence you have.

Me: How do we learn? How does it actually work? Let’s break it down until we understand all the details. Let’s connect it to other fields like physics and computation. Let’s understand the relationship between the physical objects involved in learning and learning.

You guys: Given we have this wonderful mind that is able to learn somehow, what are some good tips and methods for how to use it well in daily life?

You are interested in a higher level thing. We have some agreement and some disagreement about the higher level thing. I have the lower level part and have integrated the higher level part with it. You guys don’t have the lower level part, so your positions strike me as far too vague (and also some are false, since the actual way intelligence works matches some but not all of your speculations).

You guys have some substantial willingness to treat intelligence, and some other things, as givens, whereas I want to know how they work (and do know a lot about it) and work out implications from the lower level in order to be more precise and correct about the higher level.


curi at 5:14 PM on November 4, 2016 | #7398 | reply | quote

evan reply

One-line summary: OK. Proceeding.

I don’t remember the details but OK I’ll go by your summary. Sounds reasonable. My perspective:

How does intelligence work if you have no answers to how to choose between the infinite patterns of the world to make concepts about, how to select which matter? How does positive support work if you have no answer to which of the infinite non-contradicting-the-evidence ideas are supported (and how much?) and which aren’t? (If all non-contradicting ideas are equally supported, then positive support no longer has a logical problem but isn’t useful and becomes a more convoluted way to talk about criticism.)

You don’t necessarily need an algorithmic answer, but it does have to work with only basic building blocks that can build up intelligence, and not include any common sense, intelligent decision making, etc. I’m unclear on why you think an answer to how intelligence works (of that type? or in general?) is unnecessary.

You don’t need to answer this stuff to validate that we know things. You can tell we know things because iPhones exist and work. But you do need to address these issues to say specifically that positive support is part of how we know. Or do you have a counter argument to that? How would you know positive support works and is used, if you can’t answer some critical questions about it?

It sounds to me like you don’t know the answers to these questions. OK. Then what?

If they were unsolved problems, then it’d be up to you to work on solving them or do other stuff instead. (I highly recommend being a (non-academic) philosopher and doing work in the field, but not everyone is. Oh well. Objectivism has a lot of great stuff to say about why philosophy is important so I won’t go into it now.)

But these fundamental epistemology issues aren’t unsolved problems. Solutions have already been discovered, and they lead to some change of perspective on how reason and learning work. So the solutions are worth learning and accepting (or criticizing and rejecting, if you can.) Is there something you think my system is wrong about?

BTW I will readily acknowledge that Popper doesn’t tell you everything you’ll want to know about philosophy, life, etc. I learned Popper first and went looking for more. Popper made some important contributions to philosophy, and some bad ones, and left some other things unanswered. One needs Objectivism too. And there are some other philosophers who had some ideas worth learning too. Objectivism is the most important overall, and then Critical Rationalism.

I don’t know if you think CR is mistaken or not important enough to put time into. I don’t know if you think not knowing how intelligence works is a big deal, or minor because, hey, your thinking seems to work fine. I’m guessing we do have some important disagreements about some of this. I think we got interrupted by the forum closing down last time.

One broad perspective difference we may have is about the acceptability of flawed ideas. My position is don’t compromise at all, don’t accept any flaws. One single criticism you can’t deal with means you need a new idea which isn’t refuted (which may be very similar to the previous idea if you can come up with one that works, which sometimes you can and sometimes you can’t.) Most people don’t agree with that, though it is kinda Objectivism friendly.


curi at 5:15 PM on November 4, 2016 | #7399 | reply | quote

Evolution underlies intelligence.

Did you get anywhere with DD’s books? Got any criticism? Questions? Misunderstandings are very common without a lot of discussion to clear things up.

> He was not at all satisfied with this, equating it with “I’ll know it when I see it” which, when you think about it, is precisely my position.

How do you know it when you see it? Using intelligent thought. And how does intelligent thought work? Not by an “I know stuff when I see it” method, which would be circular. The intelligent thought which lets you know it when you see it works by guesses and criticism (evolution). You’re also, by thinking about positive support, adding some convoluted layers on your thinking, which include some mistakes, and that makes it harder to get the right answers. And where do those layers come from? You think they come from Objectivism but your position has a great deal in common with non-Objectivists. I think you’re accepting a bunch of philosophical misconceptions that are extremely dominant in our culture and which contain some knowledge that works to blind the holder to the problems they cause.


curi at 5:33 PM on November 4, 2016 | #7400 | reply | quote

person made several posts and was kinda lost so i hope this helps

One-line summary: I was going into details to show the details are complicated.

Here’s my perspective:

I like set theory. I don’t have a criticism of it. But I do agree with HB (I think) that it isn’t super simple and trivial.

The point of my post (the first one in this topic) is that people think addition is simple. They have a strong intuition that addition is simple. But if you look at how a computer does addition, it’s complicated. The way people do addition isn’t even known in much detail, but is considerably more complicated.

Addition looks simple when you presuppose intelligence and ignore all the details going on beneath your conscious mind. Most of the complexity is at a lower level. It still matters to the possibility of error. Your brain is a computer. (I still have no idea why HB is denying that, he didn’t explain his position on that particular issue when I asked.) You do math computations in a much more complex way than the addition procedure I went over.

Many programmers mostly ignore hardware and just write code and think abstractly. And that’s fine. However, any software or hardware bug can result in an incorrect software output. The hardware is still there.

The point of my post was to give people a better intuitive understanding of how 2+3 isn’t as simple as it sounds. That you didn’t follow much of my post (e.g. you mixed up base 8 numbers with 8 digit numbers) is fine and demonstrates my point (that it’s tricky stuff). Most people don’t know low level details of computers, and don’t need to. I just figured talking about it could help show addition is complicated.

There are philosophical arguments about fallibility in the discussions about Popper, but my goal here was to give an illustration. Replies which are basically summary overviews of how to do math, which appear simpler on account of leaving out most of the details, don’t refute my illustration.

BTW, using set theory would not let a computer do addition more easily.

Anyway hopefully that clarifies what’s going on.


curi at 5:45 PM on November 4, 2016 | #7401 | reply | quote

#7400

> I think you’re accepting a bunch of philosophical misconceptions that are extremely dominant in our culture and which contain some knowledge that works to blind the holder to the problems they cause.

You mean static memes right? Do you have some specifics/examples in mind?


Anonymous at 5:16 AM on November 5, 2016 | #7405 | reply | quote

none of the bystanders actually have a clue what Oism says or how to defend it or even what the questions at issue are

One-line summary: I'm asking about the fundamentals of thinking, not life advice for intelligent adults.

I'm asking about induction and concept formation. You're saying first rationally evaluate your values, goals and purpose. But how do you do that first, without yet doing any induction or concept-formation? You seem to disagree with the hierarchy presented in How We Know, or to have misunderstood what I'm trying to ask about.


curi at 7:43 AM on November 5, 2016 | #7406 | reply | quote

Thoughts on Thoughts on We The Living

I wrote some thoughts on _We The Living_ in 2009. I present them now along with new comments in italics. Being able to review your thoughts is one of the great advantages of writing them down. You can see ways you changed your mind and can relearn things you forgot.

_We The Living_ (WTL) by Ayn Rand is a very good book. One always hears about _Atlas Shrugged_ and _The Fountainhead_. I think those are better, but only by a small margin rather than a large one. WTL deserves attention.

*I still think WTL is really underrated and neglected. However, I think the gap to AS and FH is bigger than I said here. I'm judging this by how well they hold up to rereading. The more interesting content, the more times you can reread without getting bored. I reread these books when I find it interesting and think I'm learning something more. I'm now up to 10 readings of AS and 10 of FH, but only 4 of WTL.*

In the introduction it says that Kira is better than Leo or Andrei, and asks which of Andrei or Leo is better. It says Ayn Rand prefers Leo, and that she thinks Leo would be like Francisco D'Anconia if he lived in the USA instead of in Russia.

I disagree. I prefer Andrei. Let's start with Leo: Leo has already largely given up when the book starts, and he gets worse as the book continues. I don't see that Leo did anything impressive in the entire book. I'd note that Kira is attracted to him due to his appearance. Admittedly, in Rand's worlds appearance is a direct indicator of character including heroism, so perhaps when she described his appearance she intended to be telling the reader that he was just like Francisco. But I prefer to judge people by their thoughts and actions, and while Leo is a decent guy sometimes, he never does anything heroic. The closest he comes is buying passage on the boat to try to escape from Russia.

*Under the circumstances, Leo's attempt to escape Russia on the boat ought to count as heroic!*

Andrei improves as the book progresses. He learns things. I would say he is the only character in the book who learns much of anything good (some characters learn how to talk like a loyal communist, or other mundane skills). Kira was evidently born heroic. As good as she is, she already had all her merit when the book starts; I think Rand sees having to change as a bit of a weakness, rather than seeing learning as a strength.

*This is too harsh but I do still think Rand's heroes don't do much learning in her novels. That helps make the books be about heroes and good ideas, but learning is a big part of life too. And it leaves many readers wondering how to become like a Kira, Roark or Galt in the first place.*

My favorite part of WTL is when Kira confesses to Andrei that she slept with him to get money for Leo's medical treatment and that she didn't love him. In particular I like Andrei's reaction. He does not get angry. He does not whine about how his lover betrayed him, and his heart is broken, and all the stuff nearly everyone would say. That alone is wonderful. But Andrei does considerably better than that. He reacts by stopping to think. He doesn't say anything except "I didn't know" until he's thought about it. He's calm and collected even as Kira continues her mean rant. That's great too. But then the really amazing thing is that within minutes of finding all this out, and with Kira fully unapologetic, he has not only forgiven her, but praised her for doing it, said he would have done the same thing, and said it vastly raised his opinion of her. When he found out she was living and sleeping with Leo, what bothered him was not the betrayal but that the best explanation he could think of involved her being a bad person.

*I still especially like this.*

Sidenote: Why would that indicate to Andrei that Kira is a bad person? Andrei considers Leo a bad person, so why would Kira want to be involved with him? And also, why would Kira want Andrei's money? Why would she want to take advantage of him? Is she just a whore and a sort of thief? That is incompatible with being heroic.

So when Andrei finds out the truth, that Kira had good reasons, he realizes she was in fact a better person than he'd ever known. She did something very hard, but also important. She epitomizes the heroic values he liked about her even more for doing it. And Andrei recognizes all this right away and is glad about it. That is in many ways even harder than what Kira did. Think about it. A lot of people could lead a double life if they were motivated enough. Nothing about it is really too complicated. But what Andrei did, staying calm and reacting to emotional news in a rational way, most people couldn't even begin to do that. They have no idea how to do it, or even how to start learning to do it.

*Note this is only a narrow comparison, not an overall comparison of the characters.*

To sum up: Andrei has this very exceptional moment, and he is the character who learns and improves over the course of the book. That's why I prefer him to Leo. By contrast, Leo lets his life get worse and worse until he gives up and no longer wants to try or think.

The worst thing about Andrei was his suicide. He could have remained friends with Kira, and looked for ways to turn his life around, such as going abroad (even without Kira), or helping anti-communist resistance. Note that if he'd been alive longer, he would have been around when Leo left and Kira decided to escape, and she would have accepted his offer to escape together at that point.

Leo has a lot of serious flaws. He despairs, he doesn't want to think, he wastes money, he turns to crime knowing he's putting his life at serious risk, he doesn't value his life, he befriends bad people, and he mistreats Kira. Leo has a different reaction than Andrei when he finds out about Kira's double life. Andrei reacts heroically. Leo reacts despicably. Leo thinks worse of Kira, and then says he's glad for her to be worse. The worse a person Kira is, the better, is Leo's view. He doesn't want there to be any good in the world, so when he turns his back on good he's less guilty. That's just terrible.

On to Kira. She fails to improve things, but she never gives up, so it's alright. Actually Kira does improve her life in one major way. She forms a relationship with Andrei, and then helps him improve. The more he improves, the better a friend she has in her life. Unfortunately she doesn't recognize this. By the way, I think she should have accepted Andrei's offer to go abroad. It would have improved her life! She only stayed for Leo. Self-sacrifice is bad. I know she wanted Leo in her life for her own sake, but he wasn't making her life wonderful, and she should have noticed that and taken the superior opportunity. Note that it would have quite possibly saved both her own life and Andrei's life.

One of the great parts about Kira is the stuff she doesn't notice. Near the beginning of the book her family complains about their poor clothes and poor food. Kira comments that she hadn't noticed. Kira does not think about hardships, just as Roark comments that he doesn't think about Toohey. Kira instead focuses on pursuing her goals and living her life, which is great.

I like Kira's escape attempt because it was her pursuing her values. I like her interest in engineering. I like how she insists on living life her own way. For example, she enrolls in engineering classes against her family's wishes, and she goes to live with Leo even though her family will disown her for it (they forgive her when they are hungry and she has more money than them). By the way, I also like Vasili Ivanovitch, the relative who sells all his possessions but refuses to get a Soviet job.

Kira demonstrates her strength and perseverance by her escape attempt, by maintaining her double life, by never giving up, and by making a great effort to get and keep a job, to wait in all the lines, and so on. Those are the things she has to do to continue her life, so she does them, and she doesn't complain incessantly or turn her mind off or let it destroy her spirit, she just does it and keeps living like a full person, almost like a free person. She also demonstrates it strikingly when Leo leaves. She chooses not to tell him why she slept with Andrei, or where the money for his medicine really came from. A lot of people would be angry with Leo and tell him out of spite. A lot of people would tell him and say it was the truth as an excuse. A lot of people would tell him without even thinking about it first. But Kira is better than that. She judges that Leo is lost to her, so there is no point in telling him. She further judges that Leo does not want to know. Not telling people things they don't want to be told is a good policy. It's respectful of their life; it's living by consent.

*That kind of judgment call is hard to make. I'm often optimistic that people will respond rationally to the next argument, the next explanation. Even when I know better than to expect it. I've had many discussions and I know that people rarely come around after some initial bad thinking. But irrationality doesn't fit with my sense of life and doesn't feel intuitive to me.*

Kira stands up to the communists at times. Not in a sacrificial or suicidal way like Sasha (Irina's boyfriend; they are sent to separate camps in Siberia), but only by way of expressing her values and living in the way she wants to. That is nice. Sasha gives up his life for a cause. Kira values her life more than he does. She doesn't want to be a martyr. In one scene Kira considers sacrificing herself to do a good deed. She's in a communist march/parade, and some foreigners are visiting to see Soviet propaganda, and she could run up to them and tell them the truth about Russia. But she thinks of her life with Leo, and doesn't want to give that up, and she puts that ahead of communicating this important truth which has the potential to save every oppressed Russian. Good for her.

*Kira sacrificing herself to try to communicate with the foreigners wouldn't have saved Russia. If the foreigners wanted to know what was going on in Russia, they would have. There was a problem of willful blindness. If Kira ran up and screamed some information to them, she could easily have been written off as a "crazy" liar.*


curi at 8:25 AM on November 5, 2016 | #7407 | reply | quote

I like your thoughts on WtL

> *Kira sacrificing herself to try to communicate with the foreigners wouldn't have saved Russia. If the foreigners wanted to know what was going on in Russia, they would have. There was a problem of willful blindness. If Kira ran up and screamed some information to them, she could easily have been written off as a "crazy" liar.*

Do you think that's the case too when trying to communicate that mental illness awareness is bad?


Anonymous at 10:52 AM on November 5, 2016 | #7408 | reply | quote

Are Peano’s axioms compatible with the idea of a maximum natural number?

> I’m supportive of the original, 1-based Peano axiomatization.

According to Peano’s axioms, every natural number has a “successor”, that is, another natural number that is one bigger than it. For instance, 6 is the successor of 5.
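For concreteness, here's a tiny sketch (my own illustration, not Peano's notation) of naturals built from a base element and a successor constructor, using the 1-based version quoted above:

```python
# Naturals as "1, plus repeated applications of S". The axiom under
# discussion says S applied to any natural yields another natural.
class Nat:
    pass

class One(Nat):
    def __repr__(self):
        return "1"

class S(Nat):                    # S(n) is "the successor of n"
    def __init__(self, pred):
        self.pred = pred
    def __repr__(self):
        return f"S({self.pred!r})"

five = S(S(S(S(One()))))         # 1 with four successors applied
six = S(five)                    # 5 has a successor; so does 6, and so on
```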

> [I]t is widely believed that there’s a number like: 10^100^100. There isn’t.

How is this compatible with Peano’s axiom that every natural number has a successor?


Josh Jordan at 6:31 PM on November 5, 2016 | #7409 | reply | quote

The many things one could look at.

> Common sense is an appropriate thing to use to say “You cannot see 5000 similarities.” Do you disagree?

I still disagree but it makes more sense now that you explain there was a typo. That may invalidate some of what I said before (I haven’t checked yet).

> It’s all about knowing what aspects to attend to.

That still reads to me as relying on an intelligent mind (to do that knowing) in order to address a fundamental epistemological problem. Your comment on being creative also reads to me as saying that creativity is part of your answer to the problem of what aspects to attend to, but I think the issue in question involves how creativity works in the first place.

The Popperian answer, in brief outline, involves not starting with perception; that way one has more tools available when one gets to observing. If this problem comes up later on, then at that point there are more resources available to deal with it.

More later. I wanted to quickly clarify the typo did mislead me and to comment briefly on knowing.

PS I solved the math problem but I don’t know what the point was. I’m guessing there’s some trick to it that I didn’t use.


curi at 7:57 PM on November 5, 2016 | #7410 | reply | quote

Perception, physics, knowledge of reality.

> If you’re saying that our understanding of how to correctly interpret the material provided by the senses depends upon us understanding the underlying physical process, you’ve lost me. We had to learn to use our eyes with incredible reliably before we could study any of the physical processes you describe.

People in the past had a vaguer understanding of the relevant physics (and a more mistaken one in a variety of circumstances), not no understanding at all. That doesn’t make physics uninvolved.

For example, people thought vision was instant, when actually it’s very fast at distances dealt with on Earth. That approximation worked fine for a lot of purposes.

In some cases people’s ideas about their senses ran into problems, and were refined. E.g. I guess people figured out vision isn’t instant at some point when believing it was instant clashed with their scientific pursuits. I don’t know this history well.

Another example is that people long ago understood something along the lines of smell involving air going into their nose. You don’t have to know a lot of physics to know about plugging your nose. But that does involve some understanding of the physical objects involved and the mechanism of smell. You don’t smell with your eyes, which many people must have figured out when they closed their eyes but could still smell things.

Physics is so great because it’s one of the most successful fields in human history and is also fundamental because it studies the physical world and the properties of physical objects from eyes to computers (including brains) to dogs to rocks to telescopes to trees, etc, etc, etc. When you do any project in your life, some approximate understanding of physics is involved. When you sit on a chair, physics is involved. Chairs are physical objects. You are a physical object. You understand the chair will hold you up instead of collapse or dissolve or fly away. Books and computer screens are physical objects and if you didn’t understand things like stable objects that persist over time then you’d have trouble.

Physics, being both sophisticated knowledge and frequently relevant knowledge, is a valuable thing to work out connections with. If you’re dealing with physical objects in any way then you need to make claims that, at least, don’t violate the laws of physics. That requires some at least approximate understanding of the physics involved in order to check if the laws of physics are being violated. Doing stuff like this (there are many other important checks too) helps us find mistakes we make and get better ideas.

> Representationalism reduces to the idea that all we can ever have knowledge of is appearances. There’s the really real world the way it really is, and then there’s the way the world appears to us, and without being able to step outside our minds and compare the two, we cannot know the true truth of really real reality. It’s a fundamental, philosophical level perspective on the senses–all senses, anywhere, of anything.

For perception, all we have is perception of appearances (not dogs directly). But that doesn’t stop us from knowing the real world. Even if appearances were quite misleading that’s not an insurmountable problem to deal with. But appearances aren’t especially misleading. The world isn’t out to get us or trick us. (Nor is the world designed to make things obvious or trivial for us.)

> What do you think about that fundamental question? Can we ever grasp facts? Existence exists. Do we know that in a direct way, or in an incomplete, indistinct, approximate way? I suppose it depends upon what you mean by “know.”

We can know facts. I don’t know precisely enough what you mean by “grasp” to answer that one.

I’d say we don’t know in a direct way or an incomplete, indistinct way. I think it’s a false dichotomy.

We do see an appearance of a dog, not a dog directly. We see photons with information including color and shape. It looks like a dog. Is it a dog? Often, yes. Sometimes it’s a photo or video. Sometimes it’s taxidermy. Sometimes it’s a wolf with makeup. Interpretation is required. One has to think about explanations of why there appears to be a dog there. The appearance of the dog is a piece of evidence, and one needs to come up with an explanation which accounts for the evidence and also fits with other stuff you know.

Human knowledge is always somewhat approximate and incomplete. Or in other words, we can always learn more if we want to. We can always improve more, if we want. But there are no limits on how precise our knowledge can be. There’s nothing fundamental stopping us from improving our knowledge in an unbounded way. Pick any goal for how complete, distinct and exact our knowledge should be, and we can go beyond it (unless the goal is misconceived, e.g. our knowledge can’t be infallibly exact).


curi at 7:45 AM on November 6, 2016 | #7414 | reply | quote

My review of Ayn Rand Contra Human Nature

http://curi.us/1578-critical-review-of-ayn-rand-contra-human-nature

This is a long and detailed review of a book critical of Ayn Rand. The review is largely negative. These Rand critics are mistaken and I go over what they say and why it’s mistaken. I also had discussions with them on their website in addition to reading the book, which helped clarify what they think and why.

Because there’s a lot of formatting that won’t copy over, I’m just giving the link without the text of the review. It’s much more readable on my website than a copy/paste would be.

PS Some of them claim to be Popperians, but they don’t understand Popper (or Rand) correctly.


curi at 7:55 AM on November 6, 2016 | #7415 | reply | quote

Whatever field you assign the problem to, it matters and Popper solves it.

> The bottom line from my previous post is: the fact things have a large number of aspects is not an epistemological problem, but a practical one. Epistemology tells you whether or not you have identified an aspect of your subject, whether or not you have attained knowledge, it doesn’t tell you whether you have got the knowledge that will be most helpful in practical action.

Popperian epistemology is rather different than that. It not only deals with how to evaluate knowledge, it says useless information isn’t knowledge.

Whatever field you call it, we care about and address the full problem of how to learn, including figuring out what’s useful to learn and anything else man needs to guide his learning in real life.

I think you’re allowing arbitrariness to count as knowledge, even though it’s useless, and man could not live without a better approach than choosing any patterns, randomly, and creating “knowledge” about those randomly-chosen patterns (the “knowledge” being, more or less, that those patterns exist. It’s only certain patterns which lead anywhere fruitful.)

Popperians don’t have this problem because we don’t claim looking for similarities and differences (patterns) is how you form concepts or learn. Since our epistemology isn’t based on patterns, the infinity of patterns doesn’t cause us trouble. The primary focus on looking for patterns is an inductivist way of thinking.

Popperian epistemology is about evolution creating an ongoing growth of knowledge (which is useful problem-solving ideas — ideas with an element of design for a purpose — not just any useless information). The role of observation is large but less fundamental. Observation is used in criticism. Ideas which contradict observations (in a context) are rejected. This prevents reality-contradicting thinking. Observation is always selective — you don’t just look at the world (and see infinite patterns and get lost), you have some idea of what you care about and then you look. You try to make relevant observations. (We could go into details on how to observe selectively, but it gets into creativity and stuff, which we don’t have exact answers about. Similar to what others here have said, with the difference being that I’m able to address how intelligent, creative thinking works, without circularity, because the observations come later on.)

As Popper put it: all observation is theory-laden. You need theories first. Raw observation is both impossible (because e.g. our eyes are opinionated — they let us see green but not infrared) and worthless (because there’s infinitely many characteristics and patterns out there that one could observe).

I know HB objects to infinity on principle, but consider the characteristic, “Has N hands, yes or no?” That works fine for integers N from 0 to infinity. And we certainly can evaluate whether Joe has 10^100^100 hands (no); dealing with large N is possible for us. The reason we don’t care about that (arbitrary) characteristic of Joe is that we have theories about what’s important and what sorts of characteristics to look for.
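As a toy sketch of that point (my own example, not anything HB wrote): you can check a claim like “Joe has 10^100^100 hands” without ever materializing that number, which has far too many digits to store:

```python
import math

def has_exactly_power_of_hands(actual_hands, base, exp):
    # Does actual_hands equal base**exp? Answered without computing base**exp.
    if actual_hands <= 0:
        return False
    k = round(math.log(actual_hands, base))   # the only plausible exponent
    return k == exp and base ** k == actual_hands

joe_hands = 2
print(has_exactly_power_of_hands(joe_hands, 10, 100 ** 100))  # False
```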

Put another way: the arbitrary cannot be ruled out mechanically by a mindless algorithm. And observing requires selectively focusing on some things and ruling out arbitrary infinities. So we need a mind before we observe. So one’s theory of how minds work can’t make observation primary — and this problem is solved by evolutionary epistemology.

(I think this answers Stephanie Bond’s questions too. If she finds it doesn’t, just let me know and I can explain further.)


curi at 8:16 AM on November 6, 2016 | #7416 | reply | quote

Discussing epistemology.

ET:

> Fallibility is the philosophical theory which says people can make mistakes, even when they think they didn’t, and there is no way to have absolutely certain, final knowledge.

HB:

> That sounds like skepticism. You’d have to define non-absolutely certain, non-final knowledge–something that is ordinarily called “belief” not “knowledge.” Let’s call it “tentative knowledge,” for the purposes of discussion. Then you’d have to defend against the Humean reduction of probability to certainty, which we talked about earlier: for proposition P, it is only tentatively known that P qualifies as tentative knowledge. Then it is only tentatively known that it is tentatively known that P qualifies as knowledge. And so on, with the retreat from certainty increasing with each step.

The traditional conception of knowledge is infallibilist and is impossible. That has led some people to skepticism, and others to a big failed project to save infallibility. It led Popper to understand what knowledge is in a better way.

This new system includes:

- Omniscience is not the standard of knowledge.

- Knowledge is what Paley calls the appearance of design or purpose. E.g. watches involve knowledge.

- You don’t have to be absolutely certain your watch will work for it to work. Certainty as a state of mind isn’t needed.

- Errors – or in other words the possibility of improvement – don’t prevent an idea from being knowledge. There is a growth of knowledge as we learn more and more.

- You don’t need – and shouldn’t seek – guarantees about being right.

- Knowledge is contextual both in the way Objectivists talk about and also in the following way: ideas need some kind of purpose to be knowledge. You can think of knowledge as answers to questions. Both the question and the answer are necessary. Answers without questions don’t make sense.

- The same answer can be used on multiple questions, and can be a true answer to some questions and a false answer to other questions.

- Evolution is the only known way that new knowledge can be created.

- Most of the evolution in one’s mind is done before the conscious level, but it’s also fruitful to organize conscious thinking in an evolutionary way (e.g. brainstorming and critical discussion).

- The core aspect of how evolution creates knowledge is error correction. Error correction is also the core aspect of reason.

Now for the regress argument:

> And so on, with the retreat from certainty increasing with each step.

There’s no certainty, so there’s no retreat from certainty. There’s no amount of any other epistemological quantity either. Rather, there’s the boolean evaluation of refuted or non-refuted. There can be no regress with a boolean since there’s no continuum to regress along. And, more directly, saying an idea is tentative or fallible is not a criticism of that idea (doesn’t make it refuted), so that attempt at a regress stops immediately and has no effect.

> On the other hand, if you say that tentative knowledge is the same thing as what Objectivists call certainty, why not just call it certainty?

Because we live in an infallibilist culture (it’s very deeply ingrained) and the word “certainty” is some mix of outright wrong (most uses) and misleading/confusing/unhelpful as terminology (the uses that aren’t actually wrong.)

> We need a concept for, quoting from HWK, “what we can act on . . . without hestitation, without doubt, without needing further deliberation or investigation. [Footnote 148: Any further information that comes along will be useful cognitively, will add to the fabric of knowledge. But, with regard to action, there comes a point at which gaining more information cannot be expected to change one’s action-decision.” [p. 272]

Further information always may change one’s action-decision. You act when you judge it’s better to act than to spend more time deciding.

If you think it may be time to act, then you critically evaluate the idea that you should act now. If you don’t have a criticism of it, then you may act. The possibility of learning more if you delayed action is not a generic criticism of action. Your time matters. You have other things to do. You always have many things you could think more about and have to prioritize what you expect to be most productive. (This prioritization is itself an idea that should be critically considered. And the matter is never closed in a final, permanent way forever – it can be reopened when you have reason to reconsider.)

I think it’s helpful to divide problems/questions into two categories. There are abstract problems which don’t involve a time element and don’t deal with your life directly. For these, you don’t need any stopping points or decision points. You just work on it and later you may work on it some more. There’s an unending, unlimited growth of knowledge. You sometimes are satisfied with a solution and have no expectation of ever reconsidering.

And there are problems relating to human decision and action, which are time sensitive. For these you need to decide when your ideas are good enough to act. How good is good enough is contextual. You have to look at what will happen if you delay more, and what else you could do instead. And you can always come up with a non-refuted idea, in a small amount of time, about how to proceed. Do that. There are some methods for how to do this including the technique of saying, “Given I don’t know X, Y and Z, what should I do?” which can be repeated as necessary with a longer list. That’s a regress of sorts — the answers get less ambitious and easier every step, which is why this doesn’t take too long. This is proper. The less you know, the more reasonable it is to have a minimal, unambitious solution. Another method is for what you do if you have more than one non-refuted idea about how to proceed. In that case you can critically consider what to do. But if you get stuck or get low on time, you can do this: say that all the ideas under consideration fail to address the question of what to do in your current situation. So they’re all refuted. Failing to provide guidance for addressing a rival is a flaw that can be criticized. So you can throw everything out in this way and then proceed with a “Given I don’t know…?” question. These techniques are for answering questions that involve time and resource constraints, but they don’t work in the abstract growth of knowledge case (an idea failing to address a rival doesn’t make it false in general, it just makes it not good enough for the purpose of prompt action.)

I have a lot of writing relating to these topics. Rationally Resolving Conflicts of Ideas is an index post linking to many others. (What to do about conflicting ideas is one of the core problems we’re dealing with.) You can find e.g. a post on the regress problem there.

This stuff is hard to explain. I haven’t written a canonical version yet because I haven’t figured out a great way to explain it. I haven’t had a lot of success convincing people of it yet. And there’s a problem of what audience(s) to write for. And I care most about my own learning and keep improving the ideas (and my writing ability), anyway. David Deutsch explains some of this in his books and I very highly recommend them.

Finally, I agree with you a single word would be good. I use phrases like, “I have exactly one non-refuted idea about what to do (about X).” While this is long, I don’t expect to be understood if I say anything shorter (I don’t even expect to be understood when I say this). I value clarity over brevity. I could invent a new word, but that has various downsides (especially that most people reading something I say won’t have read and remembered my word.)

Depending on the situation, you can say shorter things like “I’ve decided.”

If you say “I’m certain” then most people will think you’re an inconsistent, evasive infallibilist like they are. And people will think you’re certain of the relevant abstract ideas, rather than merely certain about the time-and-resource-sensitive action-decision. E.g. say you’re launching a space shuttle to the moon. If you say you’re “certain” about the launch, and “certain” about the engines, people will think you’re saying you know for sure that the engines won’t fail, that there are no errors in the math and physics involved with getting the shuttle from here to the moon, and a lot more. But as far as the decision to launch goes, the actual idea to be evaluated is whether it’s a good idea to go ahead with the launch now or not. That involves tradeoffs like whether it’s worth more time and money to make it very slightly safer which are just the kind of thing people would be confused if you said you were “certain” about because they think it’s a matter of compromising and muddling through life. And anyway saying you’re “certain” doesn’t communicate to people that going ahead is your best idea, which you don’t know any criticism of or contextual flaw in, and is the only non-refuted idea you have about this exact problem/question, and so acting on your one and only non-refuted idea about what action to take in this case is the only rational thing to do (e.g. if you have a criticism of delaying more, like that it’d be too expensive, and no criticism of launching now, then it’d be irrational to delay more.)

---

http://curi.us/1595-rationally-resolving-conflicts-of-ideas

http://beginningofinfinity.com/books


curi at 9:15 AM on November 6, 2016 | #7417 | reply | quote

I do consider myself an Objectivist.

> Another option would be to consider abandoning Popper for Rand. In this case I would continue the discussion in order to discover precisely where the fundamental differences occur, and then return to the Objectivist canon to decide whether I might become sufficiently persuaded to begin thinking of myself as an Objectivist. (I do not detect such openness in Mr. Temple’s presentation, but motives and intentions are often unclear in online forums.)

I already think of myself as an Objectivist. I’ve avoided claiming it because I’m not looking to distract the discussion with debate about whether I’m an Objectivist instead of debate about the issues. I want to discuss stuff like epistemology, not who is a true Objectivist and who isn’t.

Now that you’ve brought it up, I think silence on my part would be reasonably interpreted as acceptance, so I need to contradict. So I’m speaking up. Judge me how you will, but I deny your claim about my own non-Objectivist view of myself.

Please expect to be ignored if you try to debate this with me.

(If you really think it’s important to discuss, I’ll be willing to debate it elsewhere at an unmoderated unlimited-volume unlimited-tangents fully-public forum (email me if interested). I don’t recommend it. But I make this offer because of my general policy of openness to criticism and discussion. At HBL I’m trying to focus on on-topic posts about interesting issues that readers would care about, but I am fully open to dealing with any criticism if you care enough to go to a more appropriate venue for the purpose of speaking to me personally.)


curi at 9:32 AM on November 6, 2016 | #7418 | reply | quote

Questions are wonderful.

Your judgement of what contradicts what, or what is unquestionable, should itself be open to question. So the questions never go away. At most you can prevent something from being questioned directly. If you only meant that some things aren’t questionable directly, that’d be a somewhat different matter, but I’m not guessing you meant that.

When you say some things can’t be rationally questioned, you’re doing something I have a concept for which isn’t well known. The issue is replacing some or all discussion with pre-discussion. Rather than discuss the issue, there is a pre-discussion about whether one is allowed to discuss the issue. E.g. one may discuss whether a question is rational before discussing the answer to the question. Then, only if I get anywhere in the pre-discussion, would I be able to get to the real discussion with you.

Pre-discussion is not always bad, but it often is. Like we could argue for years about whether something can be questioned, or not, even if I have no question or criticism about it, because I care about the principle. And if I do have a question, it’s often much faster to just answer the question. Pre-discussion often happens before the question (or some other main point one wants to discuss) is even stated and can end up being mostly irrelevant. E.g. I might have the question: “Isn’t 2 plus 3 actually 6? I typed it in my calculator and got 6.” Rather than saying “2 + 3 = 5, and that’s unquestionable” and having a pre-discussion about the rationality of questioning it, it’d be better to answer the question: “You probably accidentally hit the multiplication button instead of the addition button.”

There is no need to make anything unquestionable. There’s no actual benefit. The motivation comes from epistemology errors. When your epistemology doesn’t approach error in the right way, then you need bad ideas like self-evidence and unquestionability to try to make it work anyway.

If an idea is bad, refute it. If a question is misconceived, criticize it. If you’re worried about your time, write canonical versions of general arguments that cover many bad ideas. Then refer questioners to those.

And allowing things to be questioned helps stabilize the truth. Good ideas compare well to bad ideas. And you need the answers to bad ideas to prevent bad changes (as attempted improvements) to the prevailing idea. The criticisms of various bad ideas constrain what adjustments to the prevailing idea can be made.

“2 + 3 = 5” deals with integers, and integers are tricky (e.g. are there infinitely many integers or not?) so it really ought to be open to questioning (since one could be mistaken about the properties of integers and, if one is, then anything to do with integers could be revised.)

I don’t see any value in questioning 2+3=5 at this time. I have no grounds to doubt it specifically. (I just have doubts about HB’s non-infinite conception of integers and some other related but separate issues). But I do see value in the principle of keeping everything open to critical questioning. It’s important to answer all criticism. (Though not personally. It can be answered by anyone and written down in public. And the answers can deal with categories of ideas, not every criticism individually.) If no one wrote down why something was wrong, and then people said it was unquestionable, that’d be bad. And if there is an answer written down, saying you can’t ask the question is kinda silly and it’d be better to just refer people to the answer instead. When people actually ask bad questions, they have some misconception and could use help.

But, as is typical, I do have questions, right now, about some other things claimed to be unquestionable. HB said:

> I think that it is unquestionable that counting is a simple operation.

I actually disagree with that claim in the way I think he meant it. There’s a great deal of complexity involved when a human counts. I disagree with discounting all the complexity which is automatized, unconscious, or hardware level. I also think HB underestimates the complexity of how people think about math, including counting, and how riddled with misconceptions it is. I know a math tutor and oh my god are people confused about this stuff.

I also think that “operation” here refers to a physical process, and what physical processes we think are simple depends on our questionable ideas about the laws of physics.

> You can’t simultaneously uphold the omnipresent value of questions and allow for the possibility that questions don’t exist.

Sure I can. Critics are welcome to give some argument that questions don’t exist, and I’ll address it.

> Or, it would be wrong, irrational, context-evading, to try to question that Chicago exists. Especially now that the Cubs have won the World Series! Speaking of which, you can’t rationally question whether baseball is a sport, or whether more than 100 people watched the World Series.

If someone has a question about how many people watched the World Series, how to know, whether it’s more than 100, etc, that’s fine with me. I think a rational but ignorant person could ask lots of perfectly fine questions about it. They could ask whether and why watching it on TV counts as watching it. They could ask how to judge secondhand claims that lots of people were present in person. They could ask why to trust maps. They could ask this stuff in critical and questioning ways, e.g. “I grew up in Palestine and lots of the maps were lies. Now I live in the US and I don’t want to be tricked again, so why should I believe the maps your elites put out which include, for example, Chicago? I think maybe they’re trying to control information in order to influence us and control society.” That seems OK to me, and answerable. There are answers to those questions. They actually get into some important issues.

Some people ask questions about solipsism. And that’s fine too. Totally reasonable questions worth answering. That’s why David Deutsch did answer them in The Fabric of Reality. The questions about maybe I’m unconscious, maybe I’m dreaming, maybe Chicago doesn’t actually exist, etc, led to such interesting answers (that are useful and related to other philosophy) that they got substantial space and attention in DD’s book.


curi at 10:02 AM on November 6, 2016 | #7419 | reply | quote

Nate Silver is immoral.

Nate Silver is a former Daily Kos contributor (a super leftist evil site). He’s an extremely dishonest partisan leftist who has spent the election (including the primaries) lying about Trump’s chances in order to try to harm Trump politically. He’s been wrong over and over as a result.

Don’t trust anything from Nate Silver.

There are serious methodological problems behind his work, too, if you want to go into fine details. This can be tricky because much of what he writes is missing various details, contrary to the reputation he’s cultivated. But if you pick something you personally checked, and think is correct and has adequate detail, I will either reply with criticism or some kind of concession. Put another way: people don’t realize how much of his model is him just making stuff up and then trying to make it look far more rigorous than it is.

I’ve attached an image giving a sense of Nate Silver:

http://curi.us/files/Nate-Silver-Headlines.jpeg


curi at 11:46 AM on November 6, 2016 | #7420 | reply | quote

Final election thoughts.

Please vote for Trump. 🇺🇸🗽🏙

People over-complicate elections. It’s simple.

Would you rather have anchor babies and amnesty, or build the wall and enforce the law?

Would you rather drain the swamp and have a 10% smaller government, or have an aggressively expanding government?

Would you rather have a corrupt government or lock her up?

That’s about all your vote means. Your vote doesn’t send a complex message. Your vote isn’t about nuances. It gets lumped in with every other voter for the same candidate in your state. The election lets us choose between basic, simple, opposing views on a few major issues. That’s it.

If you vote third party, you’re choosing not to participate in deciding these things.

Vote Trump if you want a government hiring freeze, a law that two regulations have to go away for each new one added, border security, and to use coal, oil and gas to power American industry.

Vote Hillary if you want to continue Obamacare, sending billions of dollars to Iran, and refusing to face the problem of radical Islamic terrorism (let alone solve it). And if you don’t mind corruption.

That’s all this election is about.

If you care about nuance and ideas, work on philosophical education separate from the election.

If you haven’t made up your mind yet, please read Trump’s Contract With The American Voter.

https://www.donaldjtrump.com/press-releases/donald-j.-trump-delivers-groundbreaking-contract-for-the-american-vote1


curi at 1:53 PM on November 6, 2016 | #7421 | reply | quote

Thanks Elliot

I'm so glad you are posting to HBL, specifically about epistemology. So much awesome content for us to read and learn about. Thanks.


Anonymous at 5:35 PM on November 6, 2016 | #7422 | reply | quote

@#7422 who are you? i don't know what to do with this with no idea what perspective you're coming from.


curi at 5:39 PM on November 6, 2016 | #7423 | reply | quote

Peano's axioms

>> According to Peano’s axioms, every natural number has a “successor,” that is, another natural number that is one bigger than it.

HB replied:

> To paraphrase Bill Clinton, it all depends on the meaning that “has” has.

But why should everything turn on that one word? When I wrote that every natural number has a successor, I was simply referring to the following axiom of Peano:

>> For every natural number n, S(n) is a natural number.

This axiom can be described in English in many different ways, so I think the issue is not with the word “has”, but with two contradictory ideas of what the natural numbers are. Namely, Peano’s axioms specify the natural numbers as an increasing sequence with no upper bound, while HB says that the natural numbers have an (indeterminate) upper bound. (If the naturals had a largest element M, the successor axiom would make S(M) a natural number greater than M, which is a contradiction.)

Since HB contradicts Peano’s axioms, it is not clear to me in what sense he is “supportive” of them.


Josh Jordan at 9:26 PM on November 6, 2016 | #7425 | reply | quote

I guess he's supportive of them as a good way to deal with small numbers.


Anonymous at 9:31 PM on November 6, 2016 | #7426 | reply | quote

Interpreting Peano's axioms

One of Peano’s axioms states:

> For every natural number n, S(n) is a natural number.

HB writes:

> S(n) is the “successor” of n: the next number in the series. If 5 is a number and 6 is the next symbol added to the series, than 6 is a number.

>

> Yes, that’s exactly what I’m saying.

>

> So, there are two ways of interpreting and using that axiom:

>

> 1. There are an infinite series of numbers, within which each later number is one more than the previous number, going back to the base.

As you know, none of Peano’s axioms mention infinity, and the axiom we are talking about simply says that if n is a natural number, then n+1 is also a natural number. So the correctness of interpretation (1) depends on exactly what you have in mind by an “infinite series”.

> 2. Numbers are part of an ordered system of symbols, each of which gets its place by the symbol before it, and its meaning is to be 1 more than that preceding number.

>

> I, of course, am in agreement with 2 not 1.

Interpretation (2) seems to be saying that if n is a natural number, then n-1 is a natural number. If so, isn’t that rather different from Peano’s axiom, which goes the other way (from n to n+1)?

> The phrase “For every number” doesn’t say “and there are an infinite number of them.” It says, as I read it, for every number that has been devised, the next number to be devised will be one more than it.

This is not an accurate interpretation of Peano’s axiom, because it refers to two different things: (1) natural numbers that have been devised and (2) natural numbers that can or will be devised. Peano’s axiom, on the other hand, deals only with a single kind of thing: the natural numbers. Peano’s axioms make no reference to time or anything else.

> Back in 30,000 B.C. when, let us assume, no sentient being had devised a number greater than 7, did the number 257,532 exist? Not the quantity, but the number that denotes that quantity?

I will stipulate that in 30,000 B.C., the number 257,532 existed, but the symbol to represent it did not.

> Here’s an analogy, every concept, like every number, is a kind of “successor” to some previous concept(s), going back to first-level concepts formed directly from perception. For instance, the concept “subroutine” is a “successor” built on earlier concepts such as “program,” “computer,” and “execution.” But in 30,000 B.C., the concept “subroutine” did not exist. And there aren’t an infinite number of concepts, even though we could make a perfectly true Peano-like statement about how later concepts build on earlier ones.

One could certainly develop a way of thinking about numbers along these lines, but the result would be radically different from the natural numbers defined by Peano’s axioms. I think it would be helpful to reach agreement on this point before proceeding.


Josh Jordan at 10:50 PM on November 6, 2016 | #7427 | reply | quote

Peano's axioms

> I guess he's supportive of them as a good way to deal with small numbers.

If he says that, it would be progress. So far his position seems to be that the Peano axioms are consistent with his idea of the natural numbers.


Josh Jordan at 12:12 AM on November 7, 2016 | #7428 | reply | quote

some of them really hate Trump :(

One-line summary: Vet immigrants and let in the moral heroes, the immigrants who want to produce, but not the other ones…

Have you guys looked at actual statistics about immigrants going on welfare, committing crime, etc? I have. “In my experience” means anecdotes. The data on this stuff is better than anecdotes.

US immigration policy is broken. We bring in third world people over first world people, and with them come honor killings and other primitive barbarism. And immigrants go on welfare at high rates. How does that benefit us? The left is using immigration policy to try to change the country’s demographics and culture, and get bloc voters.

Don’t you think we need a border, and for people to enter the country legally, in order to e.g. prevent deported criminals from returning?

In an ideal world we’d have more or less free immigration, along with free trade. But we have a welfare state and a relativist culture that isn’t assimilating newcomers well. We have to look at what’s actually going on today (with data, not anecdotes). Can we at least shut down immigration from ISIS-controlled territory!? But Clinton wants to spend billions of taxpayer dollars to bring in more Syrians.

Anchor baby laws are uncommon in the rest of the world. They make no sense. Why should someone illegally sneaking into your country for 2 days be rewarded with citizenship for their newborn child!?


curi at 9:22 AM on November 7, 2016 | #7430 | reply | quote

re-asking some of the basic questions about support and evidence

> Identify your context of knowledge, contra Hume. Within that, only rational hypotheses, i.e., those which have some evidence. This excludes a green cheese moon and individualism as the politics of the next president. Then hierarchically order them for probability, again based on evidence. If this provides many equally likely hypotheses [do work to figure it out.]

The issue isn't whether to do thinking work. The issue is what that work consists of. How do you do the work? What steps are involved?

Which hypotheses have some evidence? How do you decide that? What I see is there are some hypotheses that are contradicted by evidence, and some that aren't. I deny there's any way to judge the non-refuted hypotheses with the evidence. You have to judge them with critical thinking. But you say to judge them in terms of evidence. How? What relationship between idea and evidence are you talking about? You think support is a relationship between idea and evidence, but I think it's a myth, so I challenge you to give the details of it.

How are probabilities determined from evidence? What are the steps to be done?


curi at 1:23 PM on November 7, 2016 | #7431 | reply | quote

point by point reply to HB

> Is this serious?

Yes.

> As stated, it is wild primacy of consciousness.

How so? There are many different possible designs for eyes, and we have a particular one with various strengths (can see green) and weaknesses (can't see ultraviolet). This isn't a claim about consciousness.

> Concepts are formed from perception.

That's your claim, but not mine. I claim they are formed a different way.

> If it is impossible for a three-month old to have “raw observation,” then, he would be unable to form concepts.

On your premise that concepts are formed from raw perception. But I deny that premise, so I don't run into this problem.

> We don’t perceive everything, but we perceive enough.

I agree.

What do you think the difference between raw and non-raw observations is?

> And this whole routine about the myriad possibilities that supposedly overwhelm us is highly artificial. It’s an importation of an adult perspective into the world of the child.

I didn't say they overwhelm us.

I didn't import an adult perspective. I'm not talking about a particular person's perspective on the world. I'm talking about logical analysis of data sets and the properties of data sets.

I'm saying there are logical issues with certain philosophical claims about how thinking works.

> (I’m reminded of Quine’s gavagai “problem,” if you are familiar with that.)

Do you have a refutation of it? I took a look at it and I thought the basic point is correct (that any finite data set is compatible with infinitely many patterns or interpretations.)

This is one of the major logical issues I've been talking about. I think it refutes some claims about epistemology. One needs an epistemology that doesn't run into this problem. I have that. You don't.

> Even for adults, it’s converting a practical problem of what’s the most productive feature to consider into a philosophical problem, which doesn’t exist:

You're not thinking about the problem the way I am. It's a logical-philosophical problem about what sorts of epistemologies can and can't work. The epistemologies which say to use inductive inference to generalize a finite data set to a correct causal or explanatory theory don't work. That can't be done. Getting correct causal and explanatory theories requires other methods that aren't focused on trying to somehow infer the answer from the data.

> Tables have a zillion characteristics, so how can I ever decide which to attend to in order to form “table.” The child forming “table” confronts no such problem.

I agree it's *not* a practical problem. Children, and adults, can think and learn. As a practical matter, people figure out tables.

The interesting problem is the philosophical problem of *how* people think. I claim it's done in a different way than you claim. And I claim your approach has logical problems like the gavagai problem. BTW, I'm unaware of any refutation of my approach that you have (some reason you know of why evolutionary epistemology couldn't work).

How does one deal with these infinities? (Infinite patterns compatible with a finite data set. Infinite characteristics of an object such as a table or dog. Infinite similarities between a dog and a rock. Infinite differences between a dog and another dog.) Children can do it, but that doesn't make it any less of an important epistemological problem to understand what the right methods are and what methods can't work.


curi at 1:48 PM on November 7, 2016 | #7432 | reply | quote

I don’t think causality is observable.

> For example, if we could perceive nothing but appearances, we could not perceive any actual instances of causality.

Why and how do you think causality is observable?

Causality isn't a physical object. Photons don't bounce off causality. You can't touch it. It doesn't make sound waves.

I think causality is figured out with critical thinking. But here we're talking about perception! How would observing causality work?


curi at 2:12 PM on November 7, 2016 | #7433 | reply | quote

Appearances and the external world.

I'll reply about chapter 2 later with some criticism.

> How do you know that, on your theory, we know the real world?

David Deutsch refutes solipsism in The Fabric of Reality. It's hard to summarize because it comes up in the book repeatedly and is dealt with by having the right philosophical system, rather than with a single clever argument. Anyone who really wants to understand Critical Rationalism should read DD's two books, and discuss them to error-correct one's understanding of the books, and then after that ask me which specific Popper chapters are important to read.

I'll talk about one aspect of my perspective that I think will be clarifying.

I have no criticism of the world being real and me having knowledge of it.

I do have criticism of the world not being real.

I look at this in terms of critical arguments. What problems, if any, does saying the world is real lead to? What does that claim contradict that I think is true, if anything? Is there any evidence which contradicts the claim the world is real? Is there any evidence which contradicts the claim that I know things about the real world, e.g. that I have a room and it has a door?

Saying the world isn't real leads to problems. Why does HB act like a real person if he's not real? Why does he do complex behavior? Why does he have his own ideas? Where do his original ideas come from if not from him being a real person who thinks of them?

Saying I don't know about the real world leads to problems. What do my eyes provide information about, if not the real world? Where does that information come from? There are answers to these first level arguments that people can give, but then, depending what they say, I have further critical questions, arguments, etc that I already know.

> As has been pointed out by philosophers throughout history, if we see only internal objects (appearances)

I don't think "internal objects" and "appearances" are synonyms. I think one sees the way external objects appear. For example, a dog appears a different color depending on the lighting. You see this appearance (red-tinted dog in red light), not some true Platonic form of the dog.

And when you see what appears to be a dog, it could be a very realistic robot or sculpture. Nothing about perception lets you automatically know whether it's a dog or a sculpture. You see how it appears and then have to figure out (with evolutionary epistemology) what real thing is causing that visual appearance.

This seems like at least partly a language issue. I think one sees the appearance of the external world (and the information you get from the senses is modified in important and non-ruinous ways before it reaches the mind).


curi at 2:34 PM on November 7, 2016 | #7434 | reply | quote

I can’t tell what you mean by “certainty”.

> Given Kant’s claim that we can’t know anything with certainty, Popper’s response is to say that we can have knowledge, reason, science, objectivity, and political freedom without any need for certainty. Popper represents a resignation to Kant.

What do you mean by "certainty"?

I say: We can't have infallible knowledge. We can have knowledge. We can make decisions. We can act on our knowledge. What more do you want?

_Atlas Shrugged_:

> Why should we permit them to blast the ground from under our feet every few steps? Why should we be kept on the go in eternal uncertainty? Just because of a few restless, ambitious adventurers? Should we sacrifice the contentment of the whole of mankind to the greed of a few non-conformists?

I'm an ambitious adventurer checking premises and questioning things people thought were certain. I don't allow the safety of thinking an issue is permanently settled with no risk of someone questioning it ever again. You may have to think and use judgement again. So what?


curi at 2:50 PM on November 7, 2016 | #7435 | reply | quote

HB detail clarifying

HB:

> there comes a point at which gaining more information cannot be expected to change one’s action-decision.

ET:

> Further information always may change one’s action-decision.

HB:

> But what I said was “cannot be expected to” and his response is “always may.” That’s the difference, right there. I’m saying that at a certain point, it becomes irrational to have a certain expectation.

I think in most cases, if you have more information (with no limit being specified on how much or how long it takes) then you could make a better action-decision.

I don't know if you deny this or you think common sense indicates you meant only a reasonable amount of additional information. I don't know if this is an issue of you thinking lots of your decisions are perfect, or if it's an issue of my literalism, or something else.

> If this is right, what is the point of saying that there is no contradiction in the unexpected happening? There isn’t any. Certainty **is** the state of being “beyond a reasonable doubt.” Only unreasonable doubts remain.

Some of this hinges on what you mean by "reasonable" and "unreasonable", which I don't precisely know.

I say people should always act on *flawless* ideas (in their current understanding). That's better than merely lacking reasonable doubts. It's stronger. It's a higher standard. And I claim it's achievable in every case (and without taking too long). I've talked about this but don't think it got replied to.


curi at 3:18 PM on November 7, 2016 | #7436 | reply | quote

HB Closes Epistemology Discussion

HB posted the following reply and closed the threads discussing epistemology and perception. Long story short, they don't want to check their premises.

---

One-line summary: We’ve reached the end

The end had to come, and I think it now has come. I’m not saying that the end had to come for philosophical discussion of this topic in general, but that it had to come for HBL as a business venture. My customers, rightly in my view, are starting to complain.

We “get it.” And I doubt anyone here has been persuaded of your/Deutsch’s/Popper’s ideas. If someone has been, I think they’ve made a serious mistake.

Let me review some of the more clearly wrong and anti-Objectivist ideas that I and most others disagree with that you have put forward:

- There are no self-evident foundations for knowledge.

- Perception is of appearances not of objects in reality.

- Concepts are not abstracted from percepts.

- Induction is not valid.

- The continuum of possible-likely-certain is not valid.

- Knowledge comes from conjecture and refutation.

- “A is A” is subject to the (metaphysical) possibility of error.

- “2 + 3 = 5” could be wrong.

- Thinking is a physical process.

- Eyes have opinions.

I’m sure I’ve left out some additional points, but the above is more than enough. So, I now rule this whole discussion, not just the existing threads, to be closed.

P.S. Yes, I do have an answer to Quine’s gavagai nonsense. The answer in essence is: hierarchy. Concepts of entities are hierarchically prior to concepts of existents in other categories. Also crucial is the fact that concepts are based on differentiation as well as integration (so appeal to what isn’t “gavagai” disambiguates). But I won’t argue it here.


Anonymous at 3:50 PM on November 7, 2016 | #7437 | reply | quote

The converse of Peano’s axiom

> In the form that I endorse [Peano’s axiom], it says that every thing which counts as a (natural) number has its place in the series, and got that place by being positioned after whatever was, up to that time, the last symbol in the series.

I don’t think there is a form of Peano’s axiom that says this. Peano’s axiom says that if n is a natural number, then n+1 is also a natural number. But the text quoted above seems to say something like the converse of that, namely: if n+1 is a natural number, then so is n. And as everyone here knows, swapping the antecedent and the conclusion of an implication is not, in general, a truth-preserving transformation.
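
In symbols (my rough rendering, not Peano's exact formulation), with N(n) meaning "n is a natural number", the axiom versus its converse is:

$$\forall n\,\bigl(N(n) \rightarrow N(n+1)\bigr) \qquad \text{versus} \qquad \forall n\,\bigl(N(n+1) \rightarrow N(n)\bigr)$$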


Josh Jordan at 4:36 PM on November 7, 2016 | #7438 | reply | quote

Context of Peano's axioms

> [Peano’s] axiom says, or seems to say, that you can always add another number.

> I don’t know what Peano himself thought, but I think the axiom assumes the context of dealable-with symbols used for real-world purpose.

I disagree. If you know something about an upper bound on the natural numbers, then you should say so in the axioms.

> It’s not cricket to lay upon that axiom the task of dealing with the attempt to extend measurement operations to supposed cases where there is nothing left to measure and no human means of carrying it out.

What’s not cricket is to endorse a formalization of your knowledge which implies things you consider false! If a false statement can then be derived by repeatedly applying the rules of your system a finite number of times, then there is either a mistake in your original knowledge or in the formalization.

Simply stated, Peano’s axioms imply the existence of arbitrarily large natural numbers, while Harry Binswanger’s view is that such numbers do not exist. Therefore, Peano’s formalization does not accurately represent Harry Binswanger’s conception of the natural numbers.

Has anyone formalized Harry Binswanger’s natural numbers in terms of logical axioms and rules of inference? I would like to see the result.


Josh Jordan at 5:03 PM on November 7, 2016 | #7439 | reply | quote

Philosophy

> P.S. Yes, I do have an answer to Quine’s gavagai nonsense. The answer in essence is: hierarchy. Concepts of entities are hierarchically prior to concepts of existents in other categories. Also crucial is the fact that concepts are based on differentiation as well as integration (so appeal to what isn’t “gavagai” disambiguates). But I won’t argue it here.

The above is the kind of bullshit that reminds me why I used to hate philosophy.

Compare with curi's summary of Quine's gavagai thing in the first place:

> I took a look at it and I thought the basic point is correct (that any finite data set is compatible with infinitely many patterns or interpretations.)

I can understand what curi is saying, and that curi's not giving an argument why it is true, just summarizing and stating agreement. That's clear and good.

I think I can easily think of examples, just from the little bit curi wrote. One would be: if you take a 3-year history of daily closing AAPL prices, Quine's gavagai thing would claim there are infinitely many possible patterns or interpretations of those prices. I may agree or disagree or have no opinion. I'm actually inclined to agree, but haven't thought about it much. But I think I understand the claim. And I'm sure curi will correct me if I actually don't.

When I read HB's statement, it seems like he's giving an argument against the gavagai thing. But it's an argument that I can't understand and have difficulty even parsing. But then HB says he's not giving an argument. So wtf is he even doing?

I have no idea how I would apply HB's statement to my example of the set of 3-years of daily closing AAPL prices. Or any other example of a finite set that comes to mind.

That was a large part of what my college philosophy course was like. And why I (unfortunately) decided I hated philosophy and (fortunately) deliberately tried to learn only enough, by rote, to pass the tests and then promptly forget it.


PAS at 5:26 PM on November 7, 2016 | #7440 | reply | quote

Did numbers exist before there was life?

> Go back before animal life evolved and take the number 2. Did 2 exist then? Could we say it existed in the relationship of all the pairs to each other? E.g., in the relationship of two leaves of a tree to the sun and the moon? I don’t think so.

If so, then to what extent is it legitimate to use numbers, today, to understand things that happened before there was life? For instance, is it legitimate to use numbers and physics to understand the motion of the planets 4 billion years ago?

Also, did photons exist before people discovered them?


Josh Jordan at 5:28 PM on November 7, 2016 | #7441 | reply | quote

how to get infinite patterns from a finite data set

@#7440

HB gave a brief incomplete indication of his argument.

I can more or less follow it. I think it's wrong. I don't think him writing it more fully would make much difference for a discussion with me.

I totally sympathize with someone who can't follow wtf he's talking about from that statement.

You got the general idea of my side of it right.

The argument for the infinite patterns is pretty simple. Here you go:

Consider: 1,2,3.

What patterns start that way?

Here's two of the many infinite families of patterns that begin that way.

First, counting groups of 3 with a gap of N, as N goes from 0 to infinity.

So 1,2,3,4,5,6,7... and 1,2,3,5,6,7,9,10,11... and 1,2,3,6,7,8,11,12,13... etc

Second, 1,2,3,N repeating.

So 1,2,3,1,1,2,3,1... and 1,2,3,2,1,2,3,2... etc

If you won't accept any sequence at all as a pattern (why not, exactly?) you can take any finite sequence and repeat it infinitely many times to make it clearly be a regular old repeating pattern.

So there's two separate ways to get infinitely many patterns starting with "1,2,3". Similar methods can be used with any other data set. It's easy.
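
Here's a minimal Ruby sketch of the two families (the method names are just mine, for illustration):

```ruby
# Generate the first `len` terms of the two pattern families described
# above. Each family has one pattern per value of n, so each family by
# itself already contains infinitely many patterns.

# Family 1: counting in groups of 3 with a gap of n between groups.
def groups_of_three_with_gap(n, len)
  seq = []
  start = 1
  while seq.length < len
    seq.concat([start, start + 1, start + 2])
    start += 3 + n
  end
  seq.first(len)
end

# Family 2: the block [1, 2, 3, n] repeated over and over.
def one_two_three_n(n, len)
  block = [1, 2, 3, n]
  (block * (len / block.length + 1)).first(len)
end

p groups_of_three_with_gap(0, 9)  # => [1, 2, 3, 4, 5, 6, 7, 8, 9]
p groups_of_three_with_gap(1, 9)  # => [1, 2, 3, 5, 6, 7, 9, 10, 11]
p one_two_three_n(1, 8)           # => [1, 2, 3, 1, 1, 2, 3, 1]
p one_two_three_n(2, 8)           # => [1, 2, 3, 2, 1, 2, 3, 2]
```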


curi at 5:35 PM on November 7, 2016 | #7443 | reply | quote

HB writes but has banned replying on HBL:

> So the positive support in this case is of the form: in all the cases so far considered, when the hypothetical cause is present, the effect is present. The corollary is: in at least some cases, when the hypothetical cause is absent, the effect is absent.

but with that formulation, there are infinitely many ideas that are supported (and infinitely many will get the maximum amount of support).

they're naive and their whole epistemology is based on selective attention.

but also, forget the infinite possibilities. every **interesting** rival will meet the same criteria to be supported. so how will this concept of support help you choose between important rival ideas? (it won't.)


Anonymous at 5:54 PM on November 7, 2016 | #7447 | reply | quote

HB posted a last-minute plea for everyone to vote for Hillary. What a fool.


Anonymous at 6:17 PM on November 7, 2016 | #7454 | reply | quote

Dividing the distance across a room

> But you could say, “In going across the room, you cross a continuous distance that can be divided as many finite times as you wish.”

Can the distance be divided 10^100^100 times?


Josh Jordan at 7:02 PM on November 7, 2016 | #7466 | reply | quote

What specifically is the mistake in thinking 10^100^100 is a natural number?

HB:

> For instance, it is widely believed that there’s a number like: 10^100^100. There isn’t.

and

> I’m supportive of the original, 1-based Peano axiomatization.

I think 10^100^100 is a natural number according to Peano axiomatization.

Can you be very specific about what you deny?

Do you deny that x^y is a natural number if x and y are natural numbers?

Do you deny that x*y is a natural number if x and y are natural numbers?

Do you deny that x+y is a natural number if x and y are natural numbers?

Do you deny that 100^100 is a natural number?

Do you deny that 10^x is a natural number if x is a natural number?

Do you deny that exponentiation is repeated multiplication?

Do you deny that multiplication is repeated addition?

Do you deny that addition is repeated use of successor? E.g. x+3 = S(S(S(x))).

Do you deny exponentiation reduces to successoring via multiplication and addition?

Do you deny that all natural numbers have a successor which is also a natural number? Do you deny one of the basic Peano axioms?

Do you have a problem with a standard definition of addition, multiplication or exponentiation used in the Peano approach? For example, addition can be defined like this:

a + 0 = a

a + S(b) = S(a + b)

Do you see a mistake there?

Where exactly is the problem?
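
To make the reduction concrete, here's a toy sketch (my own encoding in Ruby, not anything from HB or Peano) where a natural number is just nested successors of zero and +, * and ^ are defined exactly by the recursive rules above:

```ruby
# n is represented as n nested successors of :zero.
ZERO = :zero

def succ(n)
  [:succ, n]
end

def add(a, b)
  # a + 0 = a;  a + S(b) = S(a + b)
  b == ZERO ? a : succ(add(a, b[1]))
end

def mul(a, b)
  # a * 0 = 0;  a * S(b) = (a * b) + a
  b == ZERO ? ZERO : add(mul(a, b[1]), a)
end

def pow(a, b)
  # a^0 = 1;  a^S(b) = (a^b) * a
  b == ZERO ? succ(ZERO) : mul(pow(a, b[1]), a)
end

def from_int(k)
  k.zero? ? ZERO : succ(from_int(k - 1))
end

def to_int(n)
  n == ZERO ? 0 : 1 + to_int(n[1])
end

p to_int(pow(from_int(2), from_int(3)))  # => 8
```

On this picture, 10^100^100 is "just" an astronomically long tower of successors. No computer could actually build it, but nothing in the definitions themselves stops at any particular size.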


curi at 8:40 PM on November 7, 2016 | #7468 | reply | quote

bad puzzle

> ABCDE * 4 = EDCBA

the letters are individual and different digits.

this puzzle was posted on HBL.

i solved it easily, but there was supposedly a clever way to solve it with a point to it relevant to the discussion.

the intended solution has now been posted. it's less clever than what i did. it's just painstakingly figuring out digits one-by-one manually. i knew how to do that but i didn't want to because it's boring and pointless.

my take away: it sucks when people post puzzles that have no good answer while claiming there is a good answer. then you look for the answer and want to know what it is, but there's nothing there.

i think they just have low standards and are impressed by stuff i find too easy to consider clever.

Here's my solution in ruby which is a way better way to do it:

10000.upto(25000) do |n|
  m = n * 4
  puts n if n.to_s.reverse == m.to_s
end

Their way takes a lot more human labor for no reason and comes out much less elegant than the code.


curi at 8:56 PM on November 7, 2016 | #7469 | reply | quote

bad puzzle

i get it now.

they think it's impressive you can solve the problem by hand AT ALL

whereas i looked at it and initially evaluated it as not that hard

so i'm not impressed by it being soluble by hand AT ALL

they look at it and think omg the search space is 15k numbers, this is probably impossible by hand

and then the solution seems clever and impressive

but i didn't have that initial impression. i thot it's not very hard, so only an elegant clever solution would impress me.

they have a bad sense of how hard different problems are, and how hard big numbers and sets are to deal with.

they should learn about Feynman. he could do way harder math problems by hand, and fast! there's tricks for so many things. they don't have a good sense of numbers and math and what people can accomplish. Feynman was good at it. i'm ok at it (i haven't studied/practiced much).


curi at 9:37 PM on November 7, 2016 | #7470 | reply | quote

more on the puzzle

HB basically thinks noticing the solution space is 10k to 25k (like you see in my ruby code) is clever, whereas I thought that was trivial. He thought there was only one way in by looking in the right place because, I guess, he's not good at finding other ways in. I replied saying you can approach the problem a variety of ways. Basically it's like I thought, they're just not that good at this stuff and get impressed by things that don't impress me, and get stuck on things that aren't very hard to make progress on. I wonder whether they'll appreciate these other ways to start, or stay stuck on an all-or-nothing hunt for the one trick (that the answer is only 5 digits, not 6).

One-line summary: There’s multiple ways to start.

I don’t agree about just one way in (nor did I think it was tricky to check the solution space at the start and note the highest EDCBA can be is 98765 and the highest ABCDE can be is 98765/4). Alternatively, you can note:

A = 4E mod 10

Then you can see A is even and if A is 0 then E has to be 5 (can’t also be 0) and then, even if you allow leading 0s, you can’t carry enough (the highest carry is 3) for it to work.

Formalizing the carry issue, you know (4A mod 10) + (0 to 3) = E.

So you can’t use 0 and 5 for A and E. So you have possible A,E pairs from the earlier formula, starting with E=1:

4,1; 8,2; 2,3 etc

And you can see that 16 + 0 to 3, under mod 10, can’t be 1. So that’s out. And 8,2 is ok. But 2,3 is no good because 8 + 0 to 3, under mod 10, can’t be 3. So we’re not having any trouble getting started constraining which letters can be which numbers even if we somehow miss that we can’t allow a carry that takes us to 100k+.

There’s other stuff you can note from the start, e.g.:

4DE mod 100 = BA (DE here really means 10D + E, and BA is 10B +A.)

Or:

(4C + (0 to 3)) mod 10 = C

Because the most carry you can have is 3 from 98*4 = 392.

Then you see C has to be 0, 3, 6 or 9. And if C is 0, then DE has to be under 25 to avoid a carry (and over 10 because 0 is already taken for C). The other numbers also require specific carries that constrain what DE can be. Like if C is 3 you need a carry of 1, so DE has to be from 25 to 49 (meaning D would be 2, 3 or 4. But not 3 because C is 3 in this case, so only 2 or 4.)
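
Here's a brute-force cross-check of those constraints (my own Ruby sketch):

```ruby
# Constraints for ABCDE * 4 = EDCBA.

# (A, E) pairs: A = 4E mod 10, A and E distinct, and some carry in 0..3
# must make 4A + carry end in E.
pairs = (0..9).flat_map do |e|
  a = (4 * e) % 10
  next [] if a == e
  (0..3).any? { |c| (4 * a + c) % 10 == e } ? [[a, e]] : []
end
p pairs  # => [[8, 2], [6, 4], [4, 6], [2, 8]]

# Middle digit C: 4C + carry (carry 0..3) must end in C.
p (0..9).select { |c| (0..3).any? { |k| (4 * c + k) % 10 == c } }
# => [0, 3, 6, 9]
```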


curi at 10:38 PM on November 7, 2016 | #7471 | reply | quote

Finishing an alternative puzzle solution.

I decided to finish solving this by alternative methods. We had worked out 4 options for C. To save time I just start with the guess that C is 9 (the right one of the 4) and see where it leads:

If C is 9 then DE is 75-87

And the A,E pairs that work are:

8,2

6,4

4,6

2,8

I didn’t check E=9 since we set C to 9. I did check and omit E being 7. (We had ruled out 0,1,3,5 earlier.)

This makes the possible DE numbers:

76 78 84 86

It can’t be 82 because then you have both A and D being 8. Similarly I ignored 88. And we ignore 89 and the 90s because C is 9.

So the 4 possibilities we have are:

4B976

2B978

6B984

4B986

You get this by e.g. writing 976 and then noting that with E=6, A is 4 from the pairs above.

You can then match them to the reverses and see how it’s looking:

4B976 * 4 = 679B4

2B978 * 4 = 879B2

6B984 * 4 = 489B6

4B986 * 4 = 689B4

You can solve all of these for B easily. In the first one, we multiply 76 by 4 and get 304 so the last 2 digits of the answer are 04 so B is 0. So we have, as a possibility, that 40976 * 4 = 67904. But actually 40976 * 4 = 163904 so even if we trim it to the last 5 digits, and still don’t realize an extra digit on the answer is invalid, it still doesn’t work. (The reason we can get a busted answer is we were just using a limited set of constraints to narrow things down, not a fully accurate set of constraints to prevent all wrong answers.)

Next we have 78*4 gives a B of 1. And 21978 * 4 = 87912 works. Done.

It wouldn’t have been all that hard to have gone through all the 4 guesses for C in a similar way. I could also keep going through possibilities if I wanted to check if there’s more than one solution.

This isn’t that much more tedious than the intended answer. Anyway the point is even if you completely miss the supposed one way to approach it, you can still do this by hand pretty fast.
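
A quick Ruby check of the four candidate shapes from the C = 9 branch (my own sketch):

```ruby
# Try every B in each candidate and keep those where n * 4 is n reversed.
%w[4B976 2B978 6B984 4B986].each do |shape|
  (0..9).each do |b|
    n = shape.sub("B", b.to_s).to_i
    puts "#{n} * 4 = #{n * 4}" if (n * 4).to_s == n.to_s.reverse
  end
end
# prints: 21978 * 4 = 87912
```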


curi at 11:05 PM on November 7, 2016 | #7472 | reply | quote

Screencast of my HBL thinking and writing process

One-line summary: Video of my thinking and writing process.

I recorded a screencast while writing my last 6 replies on HBL about epistemology. It’s now up on YouTube.

Link: Video: HBL Thinking and Writing Process

Watch to see me think out loud about HBL posts. See how I approach the topics, how I organize my thoughts, and how I write.

Talking allows me to provide different information about where I’m coming from than text does.

I’d appreciate comments, including criticism, on my method. You can see my process instead of just the final product.

People tell me I’m mistaken about epistemology. Presumably there’s something wrong with my approach behind those mistakes. Please tell me if anyone can point out something I’m doing wrong.

The video will help people understand what I mean better and how I’m approaching HBL discussion. I hope the extra perspective on my views will clear up some misunderstandings.

I like seeing other people’s processes when I can. I can learn from how they do things, and it’s uncommon to get to see behind the scenes. Perhaps you could pick up a few tips and tricks from me, too.


curi at 9:13 AM on November 8, 2016 | #7473 | reply | quote

TRUMPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP!!!!!!! (this is an HBL post)

Today is a great day.

God bless America.

Ayn Rand would have voted for Trump, too.

#TrumpTrain

#DrainTheSwamp

#MakeAmericaGreatAgain


curi at 8:09 AM on November 9, 2016 | #7489 | reply | quote

Trump isn’t very religious and the left has gotten worse.

I said Ayn Rand would have supported Trump. HB replied:

> [Ayn Rand] opposed even Reagan. It is not possible to say what she would have thought, but my best guess is that she would have been violently opposed to Trump.

Reagan was religious and that was one of her major complaints about him. Trump basically isn’t religious. And Trump likes abortion and won’t be going after it (he even defended Planned Parenthood in the GOP primary debates.)

And the American left were less evil back when Rand was alive. I think HB underestimates how bad the Democrats are today, by a lot. But I think Rand wouldn’t have misjudged them and would have seen the need to defeat them.

Caveat 1: Not all Democrats are so bad. Some even voted for Trump. Hillary and Obama are representatives of an especially bad faction which is in power on the left. It’s the faction with e.g. Saul Alinsky and George Soros.

Caveat 2: Communists in America were very bad and were a bigger problem in the past than today. However, communists had less influence over the Democrats. Obama and Hillary are openly at the very top of the Democratic party. This isn’t just about the worst extremists on the left, it’s about who’s in power on the left.

For information on Soros, read: The Shadow Party: How George Soros, Hillary Clinton, and Sixties Radicals Seized Control of the Democratic Party

For Alinsky, see the free pamphlet: Barack Obama’s Rules For Revolution: The Alinsky Model

These are very important and everyone should read them. Most people know very little about Soros and Alinsky, even though they are major players in the fate of our country.

It’s very easy to misunderstand today’s left if you rely on mainstream media sources like the New York Times or Fox News. Thanks to the internet, tons of people now know the mainstream media is dishonest and you can’t trust everything they say. But people still underestimate how slanted the coverage is, and how hard it is to get an objective understanding of what’s going on without following a lot of information from elsewhere.

In my understanding, HB thinks Hillary and Trump are comparably bad. Even if all HB’s accusations against Trump are true, that’s still far too kind to Hillary. I’m guessing the media coverage plays a large role in this and HB hasn’t read all the facts about her that I have.

Here is one little piece of the evil that is Hillary. There’s so much more, but you have to know where to look, and spend a lot of time on it, and stop having any respect for the mainstream media and other conventional societal elites. But HB is immersed in a social circle that recommends Nate Silver, and he’s commented on the New York Times as if they were reputable, rather than to trash the hell out of them. The view from over there is too distorted to understand Trump, Obama or Hillary. There’s so many other things one really needs to know to get this right. I’ll just close with this: The Black Book of the American Left.


curi at 7:38 PM on November 9, 2016 | #7510 | reply | quote

reply to HB @ politics

One-line summary: What would change your mind?

Suppose hypothetically that I’m right about everything I said in this thread. That I wasn’t being hasty and I correctly identified mistakes you’re making. What would it take for you to reconsider and start reading about the matter?

I could tell you that David Horowitz, one of the best political writers today, was involved with the black panthers, and has written about their murders, and is a former member of the left and studies the left, and says the left is much worse today (and has written extensively explaining why). There are many other things I could bring up, like talking about what books by people in Trump’s circle of influence I’ve read and asking which you’ve read. Or I could re-ask about what you know about Alinsky and Soros. Or I could point out what you said about the NYT and CBS recently, and how you’re being naive about Nate Silver’s past results and his affiliations. What do you think might be productive?

One of the things that would get me to reconsider is if you knew of some major, valuable sources of information on these matters that I’ve neglected. Some important stuff you’ve read that I’m unfamiliar with and don’t have major criticisms of. I don’t think you have the same attitude because when I brought some information sources up you didn’t say you were already familiar, nor give a criticism, nor take an interest in them. I’m guessing you don’t know how all the sources of information I brought up are connected to Trump’s campaign and people he gets advice from, nor did you read them and see how much they have to do with Hillary and Obama. Nor did you contradict me and give any indication you already know the information I’m talking about.


curi at 8:54 PM on November 10, 2016 | #7536 | reply | quote

Binswanger, part 2

"But HB is immersed in a social circle that recommends Nate Silver, and he’s commented on the New York Times as if they were reputable, rather than to trash the hell out of them"

This is an excellent point. The only reason I can think of that HB thinks immigrants are low crime is that he gets his information from the New York Times.


Neil P at 6:36 AM on November 11, 2016 | #7543 | reply | quote

Hillary and Obama are worse than you know.

> I am not immersed in any social circle. I have my wife and three close friends with whom I communicate by phone or email irregularly.

HBL is a social circle. If you read some particular sources regularly, and not others, that’s also relevant. That’s getting information from particular people in your circle of people you’re influenced by.

Social circle does not mean “social metaphysics”; I was talking about information sources and cultural exposure.

> Your guess on that is correct. I didn’t contradict you because I wasn’t addressing the issue of how much I know about this kind of current commentary. I responded on the basis of what I witnessed in the 60s, which I listed, and what I (think I) know about Hillary and Obama, based on their public record.

If you don’t know current commentary, how can you compare past and present? If you haven’t read about topics like Alinsky and Soros (which aren’t exactly new), you can’t reasonably hope to understand Hillary, Obama or today’s left, and therefore you can’t do a comparison.

What you refer to as the “public record” of Hillary and Obama has been extremely distorted by the media. You know the media is slanted, but it’s still quite hard to get the correct picture. The picture you think you know about them is wrong, and you haven’t been involved enough in other social circles where you’d find the true information. (I don’t care about the terminology. We can call it groups of people sharing political information, like the Freedom Center group, if you prefer.)

> I would be interested to know why he holds, if he does, that today’s Left is worse than the 60s irrationalist, murderous Left.

When I said he does, I wasn’t joking or making things up. He has said so. In July he was asked:

@horowitz39 I’ve been thinking a lot about you lately. (we read ur autobiography & my Mom read it also). do U feel like ur in a time warp?

He replied:

No. This is much worse than the 60s.

Since you don’t seem familiar with Horowitz’s positions, here’s a tiny taste:

He says Obama’s approach to Iran is worse than Munich. And here’s another one saying, “Obama-Kerry: worse appeasers of Hitlerites than Chamberlain.”

And with that in mind, here is Horowitz saying Hillary is “far worse than Obama.”

Here is, “Obama could have freed the hostages as part of the Iran deal. He’s financing Iran because he hates us.”

And, “One of the fruits of the Obama-Clinton betrayal of Iraq & support for the Muslim Brotherhood and al-Qaeda in Syria.”

And, “Actually Obama’s policies ARE criminal, and treasonous. From day one his policies have aided & abetted our enemies.”

And, “Obama is an anti-American leftist first, last, and always, and Trump – like him or not – is the antidote.”

Here’s a sense of what he thinks of Hillary, “One thing that came through in the debate was what a nasty piece of work Hillary is, perhaps the nastiest in American political history,”

Here he comments on people like HB who vote for Hillary: “A vote 4 Hillary=a vote 2 put the highest level Muslim Brotherhood operative back in the WH. @Jkirchick @RonRadosh“

And, “Electing Hillary (or any Democrat) is signing a death warrant for thousands of African Americans”

And, “While you rage, a Hillary presidency will mean hundreds of thousands of unnecessary deaths both at home & abroad.”

And, “It’s not about personalities. With Hillary you get BLM & the Muslim Brotherhood, the commie social justice wanks.”

And, “Ask yourself: if Hillary gets elected because of Republican renegades, how [many] are going to die because of that? Last time it was 500k.”

HB voted for Hillary (or at least encouraged people to) without adequately familiarizing himself with the positions of David Horowitz and other Freedom Center writers and refuting their positions. Or am I wrong?

Back to HB:

> I would be very interested in reading the best one article from David Horowitz that you can recommend on the subject of how today’s Left is worse. Based on that, I would decide whether it’s worth reading more.

I don’t know of a single article explaining this particular issue. He has written stuff like, How Obama Betrayed America, but that doesn’t compare to the 60s.

What else would you take? Why not learn about Alinsky and Soros in order to learn what the left is like today so you can make the comparison yourself? Why not learn it all, or at least to the point where you can write a refutation (should you end up disagreeing)? Or if you don’t want to put in the effort a large topic like this takes, because you’d rather do something else, that would be fine, but then why not stay out of politics?


curi at 2:50 PM on November 11, 2016 | #7549 | reply | quote

Childhood is worse than blood on your hands.

In short, I think Roark is Roark before the book starts. And the same with Kira and Dagny. I don’t think this is a bad thing, but it doesn’t show people the process of becoming like them. I think there’s also a perspective difference on how learning happens which is involved here, but I’ll drop it.

I agree that Andrei having blood on his hands is plenty of reason for his suicide. It’s plenty realistic and normal and reasonable. However, when he finds out Kira has been deceiving him, he has plenty of normal and realistic reason to yell at her and be angry and stuff. But he doesn’t. He does something better, and I appreciated that.

I think people can come back from anything. Yes it’s very hard.

I actually think that most people’s childhood is a lot harder to come back from than being a murderer or killing a bunch of people while fighting on the wrong side of a war. Rand had some thoughts in this direction too (ROTP):

> “Give me a child for the first seven years,” says a famous maxim attributed to the Jesuits, “and you may do what you like with him afterwards.” This is true of most children, with rare, heroically independent exceptions. The first five or six years of a child’s life are crucial to his cognitive development. They determine, not the content of his mind, but its method of functioning, its psycho-epistemology. (Psycho-epistemology is the study of man’s cognitive processes from the aspect of the interaction between man’s conscious mind and the automatic functions of his subconscious.)

And in AS:

> “Don’t be astonished, Miss Taggart,” said Dr. Akston, smiling, “and don’t make the mistake of thinking that these three pupils of mine are some sort of superhuman creatures. They’re something much greater and more astounding than that: they’re normal men—a thing the world has never seen—and their feat is that they managed to survive as such. It does take an exceptional mind and a still more exceptional integrity to remain untouched by the brain-destroying influences of the world’s doctrines, the accumulated evil of centuries—to remain human, since the human is the rational.”

How many people come back from a typical childhood to be a normal, rational man like John Galt?

HB:

> A minor comment: Leo did not engage in “crime.” He engaged in the black market, which the state had wrongly declared a crime.

OK, granted.


curi at 3:06 PM on November 11, 2016 | #7550 | reply | quote

Lenin was a failure

Re: Byron Price’s post 14338 of 11/11/16

One-line summary: Lenin was a failure and a mass murderer.

> Being immoral does not diminish intellect nor does being moral make you intelligent.

>

> One need not be moral to be brilliant. Lenin did change the world; I don’t know if he was brilliant but he certainly was able to gather a large following. He was able to impose his will on hundreds of millions of people through his speeches and writings. That makes him effective.

There is an author you might want to read called Ayn Rand. She pointed out that what is morally right is also practical. Lenin could not be both evil and effective.

Lenin was not effective by any rational standard. He failed in his stated goal of creating a socialist society. He murdered millions of people partly because his writings were not persuasive. He couldn’t persuade people to go along so he murdered them.

Nor did Lenin reconsider his ideas once he saw they were garbage. He could have reconsidered. He had heard of Böhm-Bawerk’s criticisms of Marx. He could have investigated further and understood many of the problems of Marxism. He chose not to.

Why is Russell immoral for pointing this out?

Russell considered a mass murdering failure a great man, which is immoral.


Alan at 3:59 PM on November 11, 2016 | #7551 | reply | quote

Learning Objectivism is more important than marketing it.

Almost everyone would be better off learning Objectivism to a much more expert level than trying to market it. It’s so easy to market the wrong ideas. And when that happens, you’re neglecting your own education in order to be ineffective (or even counterproductive).

Ayn Rand had little success getting people to learn Objectivism to the level of being 50% as good as her or more. If you aren’t on a similar level to Ayn Rand (maybe 80% as good as her or better), how are you going to do better than she did at spreading Objectivism? Or is the plan to content yourself with getting a few people to understand Objectivism 10% as well as Ayn Rand did, or getting a lot of people to be 1% as good as Ayn Rand? I don’t think aiming lower like that is wise because idea quality is crucial. And if you are going to aim lower like that, at least be crystal clear about it.

I’d urge everyone to honestly compare themselves to Ayn Rand (and seek out public criticism of their judgement to make it more realistic), and think about how great she was and how limited her effect on the world has been so far. And then think about, considering how much worse than Ayn Rand you are, what are you going to do that’s going to be very effective? And wouldn’t it be better to try to catch up most of the way to Ayn Rand, both personally and also so you can be a better advocate? And if you’re stuck on how to catch up to Ayn Rand more, then I don’t think you’re in a good position to teach others to do it, so I’d suggest trying to get unstuck instead of becoming a teacher-marketer.


curi at 8:02 PM on November 11, 2016 | #7554 | reply | quote

HB didn't answer any of my questions and instead, after not reading long things, complained i quoted twitter

> The references are to tweets on Twitter! I’m not well versed in Twitter, but I don’t see any articles behind these one-liners, which are just assertions. And they are assertions about concretes, such as the Iran deal (which indeed may be worse than Munich).

In my previous post, I asked direct questions, which you have ignored.

You have not been forthcoming about what information you do and don’t already know.

I provided books and pamphlets with details, which you don’t want to read. They are too long?

I provided some brief information about Horowitz’s positions which you were unfamiliar with, and then you tell me not to use short material. Don’t even quote from short material? Or just omit the source links?

I provided an article at the bottom of my post, How Obama Betrayed America, but you apparently don’t read my posts closely because you don’t seem to be aware it was there. I told you the situation and you haven’t engaged with what I said in my post.

How exactly do you expect us to get anywhere like this? Do you want this discussion to go anywhere? Or did you just want to tell me I’m wrong about Ayn Rand, then drop it? So far none of our discussions have come close to resolution before you’ve stopped them. Do you think there are any important topics we disagree about that you would discuss all the way to a resolution?


curi at 8:27 AM on November 12, 2016 | #7555 | reply | quote

The left is acting worse today than in 1980 because they are worse.

> What I can’t quite integrate with this hypothesis is the difference between the current reaction and the reaction to Reagan’s 1980 election victory, which was also unexpected (at least until close to the election). That victory was much bigger than Trump’s. But the initial reaction of both the liberals in the public and the intellectuals was more like, “Oh well.” I heard one liberal TV commentator actually say something like: “We’ve been in charge for a long time and maybe it’s time to give the other side a chance to see if they can do better.” (The need to do better was quite evident: the nation was in a terrible state.)

That’s because you don’t understand today’s left, and don’t want to read about it, as we’ve been discussing in the other thread. Learn about Alinsky and Soros, and you’ll begin to answer your questions. You should take your own stated lack of understanding of events as evidence you should learn more.

And specifically, contrary to your recent claim, today’s left is worse. They are having worse reactions (including riots) because they are worse. You should take their worse reactions as evidence they’re worse! (This is also a nice example of the ambiguity of evidence, and how background knowledge plays a large role in one’s interpretation, as Popper has explained.)

And, yes, the schools being worse is a part of it. And yeah, whether Trump is racist or not (he’s not), the left would call him (and a large portion of Americans) racist either way.


curi at 10:56 AM on November 12, 2016 | #7557 | reply | quote

HB seems closed-minded, authoritarian, status-valuing. Do you think you will be able to change HB? Why?


Anonymous at 11:20 AM on November 12, 2016 | #7559 | reply | quote

> Do you think you will be able to change HB?

no, i don't expect it. but he has some positive traits and i think i ought to seriously try speaking with HB and HBL. (some people have replied positively to me on HBL and also privately. including HB, who has replied positively to some stuff i've posted.)

if you know some better candidates to try talking with, please share! or, preferably, get them to come to my forums.


curi at 11:23 AM on November 12, 2016 | #7560 | reply | quote

What positive traits does HB have? What are you trying to achieve by discussing with him?

Sorry, I don't know anyone you would like.


Anonymous at 11:52 AM on November 12, 2016 | #7562 | reply | quote

HB:

runs a discussion forum

replies to a fair amount of stuff i say

has personal knowledge of Ayn Rand and other interesting people

is a philosopher

has read a lot of stuff

has longer back-and-forth discussions than most people

is right on a bunch of issues, such as We The Living being a great book, and altruism being bad

can help me understand the orthodox Objectivist culture (which may or may not be what Ayn Rand intended, but is what ARI promotes and a lot of Objectivists today are like)

isn't as fragile about critical argument as most people


curi at 12:05 PM on November 12, 2016 | #7564 | reply | quote

someone made some good comments about the left!

One-line summary: Discover The Networks is a great resource.

Good post. FYI there’s some good info on Marcuse at Discover The Networks which calls him the “Intellectual godfather of the Left.” It’s a better site than the extremely left-biased Wikipedia. It gives this Marcuse quote:

> [T]his society is irrational as a whole. Its productivity is destructive of the free development of human needs and faculties, its peace maintained by the constant threat of war, its growth dependent on the repression of the real possibilities for pacifying the struggle for existence – individual, national, and international.

You can search for Alinsky and Soros on the same site to get tons more info. The Alinsky page begins:

- Identified a set of very specific rules that ordinary citizens could follow, and tactics that ordinary citizens could employ, as a means of gaining public power

- Created a blueprint for revolution under the banner of “social change”

- Two of his most notable modern-day disciples are Hillary Clinton and Barack Obama.

And you can read sections like Soros Meets the Clintons.


curi at 1:34 PM on November 12, 2016 | #7570 | reply | quote

> is a philosopher

A philosopher like Peikoff is a philosopher? Or unlike Peikoff (who doesn't consider himself a philosopher)?

In what way do you consider HB a serious philosopher, versus another person who says stuff on the Internet?

> has personal knowledge of Ayn Rand and other interesting people

What other interesting people?

> can help me understand the orthodox Objectivist culture (which may or may not be what Ayn Rand intended, but is what ARI promotes and a lot of Objectivists today are like)

Good one.

> isn't as fragile about critical argument as most people

Good one too. How can people learn to become less fragile?


Anonymous at 1:56 PM on November 12, 2016 | #7572 | reply | quote

as far as i know, HB wants to do philosophy and does do philosophy and wrote a philosophy book (How We Know) b/c he wanted to, not as repayment of a debt. and HB actually discusses philosophy a lot, and writes about it a lot more than Peikoff, i think.

HB has also done stuff like work on the second expanded edition of ITOE and he did the Ayn Rand Lexicon. the lexicon is good. he's one of the few people who's quoted more different Rand quotes than i have!

how many people who say stuff on the internet say as much as HB and talk about intricate details of philosophy in the things they say? how many have done as much reading as HB and are familiar with as many ideas? how many have as much debating experience as him?

> What other interesting people?

for example he has personal knowledge about George Reisman.

> How can people learn to become less fragile?

http://fallibleideas.com/emotions


curi at 3:54 PM on November 12, 2016 | #7573 | reply | quote

This isn’t the first time that intellectuals have given thinking a bad name.

> I think Americans now fully associate rationality with the tedious concrete-bound scientism and sacrificial anti-civilization attitude of the left, and have no patience for it any more. They have been called anti-intellectual for so long by over-educated pinheads that now they are anti-intellectual.

Without taking a position on how widespread that is today: your description reminds me of the French Revolution.

In Reflections on the Revolution in France, Edmund Burke criticizes the “reason” of the French, and defends the “prejudice” of the common English man. Burke is a pro-reason thinker himself who spent a career advancing (classical) liberal reforms. But the cultural context in 1790 was: a lot of the traditional values and common sense of non-intellectuals, which were decent enough and compatible with civilization, were under fire and called “prejudice.” And meanwhile the mantle of “reason” was being attached to horrible ideas that were incompatible with civilization. (Finding one quote about this, and without understanding the original context, HB thought Burke was an enemy. That kind of reaction has sadly happened to Burke a lot of times. Actually, Burke was right to speak in a way people at the time would understand and he played a substantial role in saving the world.)

Most people aren’t intellectuals and they’re right to be wary of wild, dangerous ideas. People have some willingness to stick with the ideas they know, which built their civilization, and that’s important even though it’s slandered as “prejudice.” It’s good that many people won’t fall in line with the latest intellectual fad they are pressured to accept in the name of “reason” or “science.” In the past this meant a lot of Englishmen saying they didn’t want the ideas of the French intellectuals. Today it involves, as two examples, the rejection by many not-too-informed people of global warming and political correctness even as all the elites order them to accept it in the name of reason and science.

It’s a lot easier to make real progress, and promote genuine reason, when people aren’t being pressured by very bad ideas in the name of progress and reason. A lot of people are willing to consider reasonable reforms and better ideas when there’s no danger, but stubbornly stick to traditional values when there are major dangers. This is a good thing on the whole which protects civilization, even if it can be frustrating to advocates of reason like us. If you want people to improve and be more rational, one big factor is to make sure they’re safe, rather than busy defending civilization from existential threats ranging from a nuclear Iran, to filling the USA with unassimilated third world immigrants in order to destroy its culture and values, to ending most usage of fossil fuels to power industrial civilization.


curi at 8:01 PM on November 12, 2016 | #7575 | reply | quote

Discussing Burke

You can’t trust secondary sources on Burke, many of which slander him. (You also can’t trust secondary sources on Rand, or on Popper, or on most interesting people.) Here’s an example of a Burke scholarship error by Thomas Sowell.

My knowledge of Burke comes from reading lots of primary sources, plus 15 secondary source books. Reading primary sources is necessary to see which secondary sources are correct or not. Only one of the secondary source books was really good, and all the rest were either OK or bad.

The really good Burke secondary source book is The Great Melody: A Thematic Biography of Edmund Burke by Conor Cruise O’Brien. If you read it you’ll discover Burke had a lot more than a “rudimentary understanding.”

I didn’t read The Conservative Mind because I focused on books that were solely about Burke. You should be aware that Kirk wrote a book called: Edmund Burke: A Genius Reconsidered. In it he attacks Burke with comments like, “[Edmund Burke] was not a man of the enlightenment” (p. 151.)

In Russell Kirk: American Conservative, by Bradley J. Birzer, p. 135, it says Kirk strategized to control the narrative about Burke against Conor Cruise O’Brien (the author of the Burke book I recommended). I’d urge you to read another take before accepting Kirk’s claims.


curi at 9:19 PM on November 12, 2016 | #7576 | reply | quote

ppl refuse to read much

Kirk convinced you that Burke is not important, not original, not a philosopher, and not pro-reason. Those are all slanders and all false. But at the same time you say Kirk can’t be slandering Burke because Kirk says he’s a Burke proponent. You’re being fooled in the same way as when Hayek claimed to be a great proponent of capitalism, but actually betrayed capitalism.


curi at 10:19 PM on November 12, 2016 | #7577 | reply | quote

Did you read HB's book "How we know"? Is it any good?

I didn't know HB was the person behind the Ayn Rand lexicon. Yes, that is really good. One can look up easily what Ayn Rand actually said about each concept. Also good to find where Ayn Rand said what.

> how many people who say stuff on the internet say as much as HB and talk about intricate details of philosophy in the things they say? how many have done as much reading as HB and are familiar with as many ideas? how many have as much debating experience as him?

As I said, I don't know anyone. HB also has many bad traits. He is authoritarian and would get you locked up if he deemed you insane. :(

Keep looking.


Anonymous at 8:36 AM on November 13, 2016 | #7579 | reply | quote

one can keep looking, and also talk with HBL, at the same time. (and anyway posting on HBL is looking b/c the audience can contact me.)


Anonymous at 10:59 AM on November 13, 2016 | #7580 | reply | quote

Did anyone interesting contact you?


Anonymous at 8:37 AM on November 14, 2016 | #7581 | reply | quote

Any screencast criticism?

People here say I’m making major thinking mistakes. I’ve exposed my thinking to criticism. I’ve put substantial effort into this. I’ve written a lot here and on my websites. I’ve made a bunch of videos including this one. Can someone tell me something in the video I’m doing wrong? Particularly a methodological issue.

Does no one have a criticism?

Or are people uninterested? If it’s disinterest, what would get people interested in critical discussion to a conclusion where someone learns something? What kind of material, if shared, would people engage with? (Please don’t respond with vague generalities about what the group might do. I want to deal with individuals who take responsibility for what they say. Please don’t suggest something unless you personally would engage with it at length.)

PS Regarding evolution I wasn’t talking about how evolution was discovered, I was talking about the content of evolution and how the content connects to epistemology.


curi at 5:37 PM on November 14, 2016 | #7585 | reply | quote

HBL Moderation

HBL doesn't allow posting about mises.org, George Reisman, or Ayn Rand Contra Human Nature. Not even to post criticism of them. He thinks they are so bad that he doesn't want to help anyone know they exist.

But HBL has lots of references and links to the Huffington Post, Nate Silver, and Paul Krugman.

Why the double standard? Why is extreme leftist evil acceptable to post about, but mises.org is over the line? mises.org isn't worse than Huff Po. Reisman isn't worse than Krugman or Silver. This is really fucked up.

It's partly personal grudges which control what you can and can't say on HBL. HB's policy of applying his personal grudges to the forum makes it much less of an open discussion place than it could be.

But I think it's also a lot of leftist sympathy and overestimation of the left. HB claims he doesn't want to sanction various evils by promoting them by letting them be linked, but then he lets extreme evils be linked to. Why? I think because he denies they are extreme evils. His control over HBL content is done according to his own large mistakes about politics.

And what happens when you try to correct his overestimation of the left? Your posts don't get emailed out to the HBL members.

Besides the stuff HB will actually delete from the forum entirely, there's a lot more stuff HB chooses not to email out. Most HBL members don't read the forum, they only read emails. HB doesn't email out many of the posts challenging or criticizing him.

HBL is nothing at all like a free speech zone. And it has a massive leftist slant in what content is allowed. And there's no clear written rules for what you can and can't say, it's all unpredictable, unexplained arbitrary discretion.


curi at 6:11 PM on November 14, 2016 | #7587 | reply | quote

HBL Moderation

"HBL doesn't allow posting about mises.org, George Reisman, or Ayn Rand Contra Human Nature. Not even to post criticism of them. He thinks they are so bad that he doesn't want to help anyone know they exist."

"But HBL has lots of references and links to the Huffington Post, Nate Silver, and Paul Krugman."

I suppose it's the "heretics are worse than pagans" theory. Reisman was friends with Rand and seems to be a dogmatic Objectivist. Is it just because he had a falling out with Peikoff?


Neil P at 5:59 AM on November 15, 2016 | #7588 | reply | quote

reply to HB about my screencast

> hard to hear what you’re saying

What do you mean hard to hear? Do you literally mean a problem with the volume? If so that sounds like a technical issue because I can hear it fine with my computer volume at 50% and youtube at 75%, nowhere near max, and I know some other people listened to it successfully. Do you mean my accent? My speaking pace? I’m guessing the trouble is with understanding words (rather than actually hearing me), but the comment is unclear.

Would something be wrong with a “dare”? What?

If anyone can point out a mistake I’m making, I would greatly appreciate it. As always, this requires explaining it in a way that I can understand and see it’s right (while making an effort to understand it and see the value on my end.)

I asked what would work instead. You haven’t answered.


curi at 9:25 AM on November 15, 2016 | #7592 | reply | quote

You tend to mumble a bit in your videos, maybe because you are thinking aloud.


Anonymous at 9:46 AM on November 15, 2016 | #7593 | reply | quote

i don't claim to be a great speaker, and i've noticed sometimes i trail off so the last couple words before i stop talking can be hard to hear. i also think my accent is strong to some people. and i sped the video up to 125% which may be an issue here, i don't know. plus my baseline talking speed is fast at times (it's somewhat uneven pacing too, i think. which i don't think is a big deal. but it means there are faster parts that are harder to follow than the slower parts.)

if it's just mumbling, i think that would affect particular sections and still leave the substantial majority understandable.


curi at 9:51 AM on November 15, 2016 | #7594 | reply | quote

Speeding is not a problem for me, as I've watched so many sped-up videos that some people now sound more natural sped up, but it could be a problem for others. Your accent is indeed strong, and it sounds a bit "snob" or "gay" to me. Maybe that can be off-putting to some people?


Anonymous at 10:00 AM on November 15, 2016 | #7595 | reply | quote

> Your accent is indeed strong, and it sounds a bit "snob" or "gay" to me. Maybe that can be off putting to some people?

that would not surprise me. accents are common and often sound strong to others, even if people don't know they're speaking with an accent. it's to be expected due to language variance. welcome to northern california!

i think people can understand me OK if they care to. lots of accents are harder to understand, especially foreign ones, but people often manage anyway.


curi at 10:06 AM on November 15, 2016 | #7596 | reply | quote

i'm guessing HB doesn't understand my speech primarily because he isn't very interested. just like how he doesn't understand my text either! i guess he went into the video ready to find an excuse not to watch it.

i'd still be interested in making my speech easier to understand. i don't think trying to change my accent would be worth the effort though. i have tried a little to trail off and mumble less -- i find that a more approachable and interesting problem.


curi at 10:13 AM on November 15, 2016 | #7597 | reply | quote

trying meta discussion since none of the discussions went anywhere

One-line summary: Anyone want to try a serious discussion to a conclusion?

If you find a point of disagreement you both consider important, and discuss it to a conclusion, then at least one person learns something major. That’s really great. And most conversations have little value for anyone, so this is better.

Why don’t people do this more? Does anyone here want to do this? This is what I primarily look for in discussions. So far no one here has seemed to have an approach to discussion anything like this. Despite my efforts, there hasn’t been persistence to resolve anything. People give up long before resolving much.

Don’t people care about answering disagreements? When you don’t answer criticism of your views then you could be wrong, and people could know you’re wrong, and you could just be irrationally persisting in error. All criticism should be publicly answered, rather than ignored. There’s no excuse to ignore criticism of your position instead of addressing it.

I don’t like hit-and-run discussions where putting effort into the matter (e.g. reading books and writing new essays about it) won’t be productive for the discussion because people don’t care to take it to a conclusion. That ruins discussion quality and seriousness.

I’m willing to talk with people who disagree with me, to the point of actually resolving a dispute. I’m willing to answer criticism. Are any of you?

Do people not understand these concepts and need it explained how you can answer criticism, resolve disagreements, etc? Do people disagree but, in accordance with their philosophy, haven't written their disagreement down and won't state it? Or do people just deal with discussion in a haphazard way? Or what?


curi at 9:42 PM on November 15, 2016 | #7610 | reply | quote

someone (not HB) arguing against answering much criticism, on principle!

One-line summary: Not addressing criticism is evasion.

A criticism is an explanation of a flaw with an idea. That’s of value to your life because it helps you get ideas with fewer flaws.

You have no way to know which ideas are correct without addressing criticism. Every criticism is a potential error on your part, and you can’t know if you made an error or not without addressing the criticism.

If you skip some criticism, it’s a refusal to think about those points.

If you address a criticism in a way that prevents the critic from making followup points, such as dismissing it in your mind while saying nothing and moving on, then you're blocking most criticism. You're refusing to think about any followup criticism.

Followup criticism is a crucial part of criticism. Most important criticisms are not communicated in the first round of discussion. They require some clarifying questions because communicating is hard and misunderstandings are very common even when people aren't trying to criticize. And followup criticisms require you to state your answers to the first few criticisms (which people usually have, but which often vary between different people) so the critic can see which other things you don't know and expand his criticism.

What about replying in private to a critic?

Addressing criticism in public is the best and standard approach because it allows for the criticism, and/or your reply, to be referenced in future discussions. This is a huge time saver. What if the same criticism is made twice? If you or anyone else has written down a reply, then the one answer can be used twice. Most criticism isn’t new and should be answered by reference to existing answers. (A key here is that you take responsibility for any answers you use, even if someone else wrote them. If it’s wrong, you’re wrong, and you should take that seriously just like when you make other mistakes.)

Addressing criticism in public also allows other critics to follow up. It exposes your ideas to criticism from the public instead of just from one person. That allows more people to help you learn better ideas. And it allows more people to learn from your ideas. Many potential critics could read your answer and change their mind themselves.

And, when it comes to impersonal ideas (which is most of what’s interesting to discuss), there’s more or less no downside to answering criticism in public rather than writing to a critic in private. Why wouldn’t you do it that way? What have you got to hide about your philosophy views, your economics views, your politics, etc? (OK there are jobs where your politics could get you fired. Maybe you’ll say that’s an exception, but I don’t think it’s relevant to these HBL conversations. And why take one of those jobs, anyway?)

If you don't have a general policy of dealing with all criticism, the result will simply be that you're avoidably wrong about many issues where people knew better and were willing to tell you. If you don't do it in public, you're hiding from critics. If you claim thinking about counter-arguments to your ideas is not an efficient use of your time, you should improve your skill at how you do it rather than simply refuse to think about a bunch of known reasons you're mistaken. There are solutions for how to do it in a time-efficient manner. Are you interested? Do you need help with that? Ask if you want.

And when you hold positions shared by many, there ought to be plenty of time-saving answers already written by others that you can reference (and would want to have read yourself to check the correctness of). If there aren't already great public answers by the many people who agree with you, what's going on? And if you hold ideas that few others agree with, that's fine except it does mean there's more burden to check the ideas and address criticism yourself; it'd be foolish to hold a tiny minority position and then not address the reasons people disagree with it.


curi at 9:59 AM on November 16, 2016 | #7612 | reply | quote

My 2009 comments on The Comprachicos.

Original: http://curi.us/1448-the-comprachicos

*Paragraphs in italics are additions today.*

These are my comments on The Comprachicos, an essay by Ayn Rand found in http://www.amazon.com/New-Left-Anti-Industrial-Revolution/dp/0452011256

This will make a lot more sense if you read it first. It is not a summary, and it leaves out a lot of good ideas from the essay.

I agree with Rand's pro-children attitude, as opposed to the usual more hateful one. Rand says young children should start learning abstract ideas, and I agree with her.

I agree with her criticisms of "the pack" and conformity and collectivism, and her view that the "problem children" often have the best chance to get through school with their reason intact.

I agree with many of her specific examples about how some methods of teaching are nonsense, or contradict the educational philosophy the teachers claim to follow. I disagree with her apparent assumption that most of the effects and meaning of teaching methods can be discerned by looking at them and reasoning about them. I think that the bulk of what's done to kids is more subtle than that. And I think kids are resilient and such blatant methods, alone, are not enough to have the effects schools do have.

Rand only mentions parents briefly. She says mistakes of this size aren't made innocently. I don't agree with that logic. I do agree with her assessment that many parents want to get their kids out of their hair, and don't think carefully about what sort of place they are sending their kids, and also don't have thoughtful, rational discussions with their kids.

*Making mistakes is a typical human activity. Some of our mistakes will look huge to people in the future who are much wiser than us. The size of a mistake doesn’t tell you about the guilt involved. (Like Rand’s point that the issue with government is whether it follows its proper role, or meddles where it shouldn’t, not size as such.)*

Rand takes a fairly nature-oriented position on some aspects of the nature/nurture debate. She does talk a lot about how education matters, but she also seems to think being more or less intelligent is innate.

*I think how intelligent healthy people seem is a matter of how good their ideas are, it’s not some innate function determined by their genes.*

Rand sometimes appeals to "the evidence" or "scientific research" but fails to cite it or explain what research was done and how it is capable of reaching the conclusion it reaches. This is scientism, but it's mild and she provides arguments for all her conclusions.

*I don’t think brief appeals to the science supposedly being on your side are productive today. Maybe the culture surrounding this stuff was different when Rand wrote this.*

*Saying the science is on your side is easy, whether it is or not. And for issues like these, the state of the science is generally controversial.*

Rand overestimates how much teachers hurt children *intentionally*. She thinks they somewhat plan for it. Alright, some do, but they don't actually know how to plan for things and then make them happen, so their planning hardly matters. Rand makes a comment that if they cared about the children they'd notice certain policies are harming children and stop or revise them, and concludes they don't care about children's well being. I disagree with that. I don't think they know how to evaluate what works and what doesn't. Doing that takes skill which they don't have. They have no idea if they are doing harm or not. I don't want to absolve them of all guilt, or even any guilt -- they do see crying children, and they definitely know that many children dislike much of what they do -- but let's not assume they know, plan, or intend more than they do. They are clueless and helpless, and have a mix of callous disregard; superficial, tender love and caring; some meanness; and for many teachers, especially the younger ones, only occasional hatred of the children. Many teachers have given up and don't think about what they are doing.

*Controlling people is hard. Teachers and parents often think they are in control, but would quickly lose control if they didn’t follow traditions and memes. The knowledge to control people has been developed by many people over a long time, and it only works within limits. It’s fragile if circumstances change or the authority starts giving orders of the wrong types. Teachers and parents are themselves subject to a lot of control from their culture, their boss, and the very traditions and memes they are using to control children.*

Rand says schools and culture used to be better and more rational, and the comprachicos only gained control quite recently, and the current educators had a better education themselves. I disagree. Rand doesn't go into detail here. It's true that schools have changed in some ways, and their explicit rhetoric has changed, but I see no reason to think their basic effect has changed. Perhaps Rand is going too much on the schools' explicit messages. If anything, school has gotten better. People are smarter now, and more capable; we can tell because they deal with more complex lives, have more possessions which are more complicated (like computers), there are more knowledge workers, and GDP per capita is much higher. And schools have had reforms, e.g. with corporal punishment. And we now have more and better sources of information (TV, internet, more books, etc).

*The leftist takeover of schools to use them for propaganda is a separate issue from the basic nature of schools: making kids obey authority, breaking their spirits, and getting them to “listen” and believe whatever they’re told to believe. Schools would still be brutal without the leftists.*

Rand does a good job of emphasizing how much of a child's learning is inexplicit, and how much of what is taught is inexplicit (for example, she discusses the emotional vibe of the pack). And I agree with her comments on whim.

I agree with Rand's mentions of the *boredom* of school.

I agree with Rand that the primary way to do well in the pack is to learn to manipulate human beings, and this is disgusting, and not something an individualist would want to do. I agree that "socializing" and "fitting in" are wicked.

I liked Rand's comment that non-conformist children have *no one* on their side. Not even themselves, because they don't have much understanding of the nature of their battle. However, she's slightly mistaken: they have Rand on their side! She does indeed sympathize with them. Good for her. And I do too.

I don't agree with Rand's assumption about the developmental status of children being very strongly tied to age. She even mentions at one point that this is false, by saying children of the same age and intelligence can be at significantly different levels of development if one is educated well and the other isn't. Yet she still refers to what three year olds need, what five year olds need, and so on. (And it's not even clear if these age numbers refer to normal children or properly educated children.)

I generally agree with Rand's comments about how people automate large parts of their thinking. For example, Rand says you have to learn to focus your eyes, or to coordinate your muscles to walk. And this isn't obvious or trivial. Rand says we learn a huge amount in our first two years, and if any adult could learn as much, as quickly, or as well he'd be a genius. But adults have automated the process so much it seems easy.

I agree with Rand that fakers -- for example people who pretend to agree with the pack when they don't -- often become fakers by habit, and then live that way without thinking, and it becomes a major part of them, and the "real" self gets lost and forgotten.

Perhaps my favorite part is on page 197:

> At the age of three, when his mind is almost as plastic as his bones, when his need and desire to know are more intense than they will ever be again, a child is delivered -- by a Progressive nursery school -- into the midst of a pack of children as helplessly ignorant as himself. He is not merely left without cognitive guidance -- he is actively discouraged and prevented from pursuing cognitive tasks. He wants to learn; he is told to play. Why? No answer is given. He is made to understand -- by the emotional vibrations permeating the atmosphere of the place, by every crude or subtle means available to the adults whom he cannot understand -- that the most important thing in this peculiar world is not to know, but to get along with the pack. Why? No answer is given.

> He does not know what to do; he is told to do anything he feels like. He picks up a toy; it is snatched away from him by another child; he is told that he must learn to share. Why? No answer is given. He sits alone in a corner; he is told that he must join the others. Why? No answer is given. He approaches a group, reaches for their toys and is punched in the nose. He cries, in angry bewilderment; the teacher throws her arms around him and gushes that she loves him.

I like the "Why? No answer is given." theme.

I think Rand's comment that loneliness is only for people who have something of value to share, but can't find any equals to share it with, is insightful. She says the emotion that drives conformists to "belong" is fear. I'm not so sure about that. I think fear plays a role, but there are many other issues, such as not knowing what else to do, and thinking non-conformity is morally wrong.

Rand hates: Kant, John Dewey, Marcuse, Hegel, Logical Positivism, and Language Analysis.

*Rand proposed Montessori schools as part of the solution. I disagree. Here’s my criticism of an Objectivist presentation about Montessori.* http://curi.us/1793-ray-girn-the-self-made-child-maria-montessoris-philosophy-of-education


curi at 10:35 AM on November 16, 2016 | #7613 | reply | quote

Textbook selection isn’t a great opportunity.

Trying to influence textbook selection is hard and awful. Read Richard Feynman’s experience with it: "Judging Books by Their Covers" in *Surely You’re Joking, Mr. Feynman!*

And textbook selection isn’t a low hanging fruit. The left already knows about it. There’s already effort going into controlling it.


curi at 11:17 AM on November 16, 2016 | #7614 | reply | quote

i'm now banned from posting to HBL. i can still read it for now. details later.


curi at 12:39 PM on November 16, 2016 | #7619 | reply | quote

HB Announces My Ban

The following is the full forum post by HB with the uninformative title "Administrative note".

One-line summary: I have removed Elliot Temple’s posting privileges

After much consideration, I decided to remove Elliot Temple’s posting privileges. His posts were not adding value to HBL, and they were: 1) coming from an alien context, 2) nearly always filled with wrong ideas–sometimes startlingly wrong (your eyes are, he says, “opinionated”)–ideas not well argued for, 3) combative, and 4) skating on the edge of violating our etiquette policy. They also were often too long.

All in all, I began to cringe when I saw his name on a post. Instead of the question “Is anything he’s written actually bad enough to take away his posting privileges?” I realized the question was more, “Why do I want him posting on my list, if almost every post brings me grief?”

After I made the decision, but before he knew of it, he posted a piece charging our dismissal of many of his “criticisms” as evasion–the cardinal sin for Objectivism. But, again, I read that only after reaching my decision.

In private email, he asked me to post the following for him:

> 1) I’ve been banned from posting to HBL, so don’t expect me to reply anymore.

>

> 2) It’s not my choice to end the discussions. I didn’t give up.

>

> 3) If anyone wants to continue a discussion, email me ([email protected]). I’m happy to continue any of the discussions and respond to outstanding points, but only if people choose to contact me.


curi at 10:00 PM on November 16, 2016 | #7629 | reply | quote

It's pathetic that HB's best example of me being wrong is a statement of basic science in terminology he found confusing. I put a lot of effort into clarifying various terminology for him, but to little avail.

Apple designs cameras in opinionated ways. They make tradeoffs in the design of the hardware and in the software processing, which changes the image before you see it. The software processing is like red eye removal or other touchups that change the image, but automated.

Our eyes are opinionated in the same sense – they have design tradeoffs and they alter images (kinda like touchups) before we see the final result.

With an iPhone, I believe apps can now access the raw camera data. In the past they couldn't. With our eyes, we can't access the raw data. We only have the final, edited image data. It's important to recognize that it's basically gone through several filters designed in an opinionated way (to be good at some things, bad at others, according to particular design goals that could have been chosen differently).

In this case it's not a human designer with a human opinion. It was designed by genetic evolution. That's still a knowledge creating process which created eyes which are adapted to some purposes and not others, in the same way Apple makes opinionated cameras good for some tasks and not others.
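To illustrate with a toy sketch (the names and numbers below are made up by me for illustration; this isn't Apple's actual pipeline or a literal model of vision): an "opinionated" pipeline throws away data it wasn't designed to care about and touches up what's left before anyone sees it.

```python
# Toy sketch of an "opinionated" pipeline (purely illustrative; not Apple's
# real image processing and not a literal model of how eyes work).

raw_reading = {"green": 0.8, "red": 0.5, "infrared": 0.9}  # hypothetical sensor values

def opinionated_pipeline(raw):
    # Design tradeoff: only the wavelengths the design cares about are kept,
    # so the infrared channel is silently discarded.
    visible = {band: level for band, level in raw.items() if band != "infrared"}
    # Automatic "touchup": adjust the kept values before anyone sees them,
    # like auto-exposure or red eye removal.
    return {band: min(1.0, level * 1.1) for band, level in visible.items()}

print(opinionated_pipeline(raw_reading))  # you only ever see this edited output, never the raw data
```

The point is just that the output embodies design choices; a pipeline designed differently would show you something different from the same input.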

Here is my original passage in which I stated my meaning clearly in the first place. HB's misunderstanding is not innocent.

> As Popper put it: all observation is theory-laden. You need theories first. Raw observation is both impossible (because e.g. our eyes are opinionated--they let us see green but not infrared) and worthless (because there're infinitely many characteristics and patterns out there that one could observe).

Eyes let us see green but not infrared. They are designed a particular way. This is HB's best and clearest example of me being wrong?

I guess he decided to misunderstand me as meaning eyes are conscious? But I would call an iPhone camera opinionated too without meaning it's conscious. HB never bothered saying how he was reading my statement.

HB replied with a wild accusation:

> Is this serious? As stated, it is wild primacy of consciousness.

And I replied clarifying again:

> How so? There are many different possible designs for eyes, and we have a particular one with various strengths (can see green) and weaknesses (can't see ultraviolet). This isn't a claim about consciousness.

And HB didn't reply but, apparently, continued to hold this against me while disregarding what I say I mean. Seems dishonest.


curi at 10:13 PM on November 16, 2016 | #7630 | reply | quote

notice how HB is offended by *accusations* of evasion, without bothering to address whether they are true or deal with my *arguments*.

HBL people, including HB, do not do paths forward. typical evasion! and HB banned me rather than discuss paths forward. now he'll continue evading the whole issue of paths forward, and then also continue evading many particular topics (e.g. that induction doesn't work).


curi at 10:15 PM on November 16, 2016 | #7631 | reply | quote

no warning

there was no warning before HB banned me from posting. that's super lame.

i also think it's really lame that i wasn't banned for violating any rules, just for being a disliked critic. and there's no posted policy telling all prospective members that HB bans whoever he feels like just because he thinks they're wrong or otherwise doesn't like what they have to say.


curi at 11:02 PM on November 16, 2016 | #7632 | reply | quote

HB's own story is basically he couldn't come up with a good excuse to ban me even though he wanted to. so then he did it anyway.

i respect his property rights, but i don't respect his unwillingness to deal with criticism, philosophy discussion and paths forward. and i don't respect his moderation policies. "ban whoever you feel like" is a bad way to run a discussion group, and especially bad when you lie to the public about the nature of your discussion list. i read how HBL works before posting. i wasn't informed about this possibility. it's not a documented policy. i was led to believe i would be allowed to discuss, with no special exception for cases where HB didn't like my ideas and claimed i was mistaken.


curi at 11:09 PM on November 16, 2016 | #7633 | reply | quote

> i'm now banned from posting to HBL. i can still read it for now.

Sorry, I did see it coming. I don't know if i should have advised you to slow down.


Anonymous at 7:25 AM on November 17, 2016 | #7641 | reply | quote

I saw it coming too. This isn't my first ban. Slowing down was not a solution.


curi at 10:10 AM on November 17, 2016 | #7643 | reply | quote

HB didn't allow me to invite people to my forum to continue discussions he interrupted.

I, of course, do allow people to post about other forums on my forums.

So far the replies to HB's announcement are just cruel attacks on a muzzled person. People start being aggressive jerks the moment I can't reply to call them out for being jerks, criticize their immorality, etc! It also shows they were being very dishonest in their discussions with me instead of expressing their real opinions.


curi at 10:51 AM on November 17, 2016 | #7644 | reply | quote

> I saw it coming too. This isn't my first ban. Slowing down was not a solution.

Why not?


Anonymous at 11:33 AM on November 17, 2016 | #7645 | reply | quote

> So far the replies to HB's announcement are just cruel attacks on a muzzled person.

What are they saying? My free trial is over.


Anonymous at 11:46 AM on November 17, 2016 | #7646 | reply | quote

How would saying things they don't want to hear, more slowly, have made them happy to hear it?


Anonymous at 11:54 AM on November 17, 2016 | #7647 | reply | quote

> How would saying things they don't want to hear, more slowly, have made them happy to hear it?

My thought is it would give them more time to adjust to something foreign to them, to think it over, to not feel invaded and overwhelmed.


Anonymous at 8:29 AM on November 18, 2016 | #7649 | reply | quote

HB seemed to like you quite a lot when you posted your analyses of *We the Living*. It seems he changed his mind tons in a few days.


Anonymous at 3:56 AM on November 19, 2016 | #7674 | reply | quote

i'm writing my HB and HBL criticism blog post. have done several drafts. if anyone wants to read a draft copy, email me quickly and you may be able to give feedback before it's posted.


curi at 4:58 PM on November 19, 2016 | #7681 | reply | quote

i've been banned from HBL. i wrote a blog post explaining. it criticizes HB, HBL, passive minds, and more!

http://curi.us/1930-harry-binswanger-refuses-to-think


curi at 11:50 AM on November 20, 2016 | #7683 | reply | quote

#7085

>Popper is a worse communicator than Rand.

Doesn't this imply that Rand is a bad communicator?

Wouldn't it be better to formulate the sentence as follows instead? "Popper is not as good of a communicator as Rand."

Or do you think that Rand is a bad communicator? Based on what I have read on curi.us, I think you do not hold that position (that Rand is a bad communicator).


N at 9:30 AM on May 22, 2019 | #12474 | reply | quote

#12474

>>Popper is a worse communicator than Rand.

> Doesn't this imply that Rand is a bad communicator?

No, AFAICT. Why do you think that?


Alisa at 10:30 AM on May 22, 2019 | #12475 | reply | quote

I was mistaken. I made the wrong conversion from English in my head and got it mixed up (I roughly read it as "even worse than" instead of what it actually said).

https://writingexplained.org/worse-or-worst-difference


N at 12:46 PM on May 22, 2019 | #12476 | reply | quote

#12475

> No, AFAICT. Why do you think that?

In Swedish there are two different words for "bad": one that is used similarly to the English word, and one that is used for things considered inherently negative, such as diseases. I read it as the latter. I was mistaken and should have re-read before posting.


N at 1:14 PM on May 22, 2019 | #12477 | reply | quote

> When Popper rejects "irrevocably true statements," he's saying that in a future context we may get new information and change our minds. Popper thinks of this in terms of fallibility. Whatever we do, we may have made a mistake. Popper also thinks of revising ideas when we get new information in terms of correcting mistakes. **Popper is unaware of the Objectivist perspective that older ideas remain contextually true. Popper's point is our ideas are never final; further thinking, progress and improvement are always possible.**

This was really helpful to me. I was struggling with what to me seemed like contradictions with Objectivism and looked like skepticism in Popper.


N at 1:28 PM on May 22, 2019 | #12478 | reply | quote

I don't think Popper would like "contextual certainty". Contextual certainty is confusing because you could always make a mistake *within your context*. You might not use the best available knowledge. You might be dishonest. There's no way to be certain you didn't make an avoidable mistake. There's no way to prove your conclusion is the best conclusion possible to know at that time, in that context. So it'd be better to call it "contextual knowledge" but not "contextual certainty" or "contextually true" because you have no way to guarantee it's true even contextually.

The important thing is that knowledge is judged contextually, not non-contextually. You shouldn't demand that ideas meet a standard of omniscient perfection. We always act in the context of our lives and must judge ideas to act on by the standard of whether we judge that they're the best available idea to act on.

Another key part of the context of an idea is the purpose of the idea. If my goal is to get to point A, then you can't criticize the idea for failing to arrive at point B, even if you think point B is a better place. You can criticize the goal itself, but that's different than criticizing the idea about how to accomplish the goal. There's also the general context of your values and life goals, e.g. I generally wouldn't want a method of getting to point A that costs so much money that I become poor. The issue here is that only some criticisms are relevant in context. Some "errors" are not relevant errors in the context.


Dagny at 5:16 PM on May 22, 2019 | #12479 | reply | quote

#12479 Expanding on this:

So there's both the *narrow context* (the goal of this particular idea) and the *broader context* (my life, my values, my other ideas) to consider. Criticisms have to be contextually relevant. A criticism has to say why something fails at its goal (narrow context error) or causes problems for me in some other way that I care about (broader context error).

You can also phrase this without the word "criticism", e.g. "An error has to fit the narrow or broad context or else it isn't an error."

When we learn new things, we often find out that some of our old ideas were errors (they didn't achieve their goals or they harmed our lives in some other ways that we didn't realize). Some were just less good than they could be, and some were worse than nothing. But that doesn't mean it was an error *to act on or accept* that idea at that time. If you did your best, honestly, without evasion, etc. (which is rare), then *you* didn't make an error even if the idea is false.

(Doing your best doesn't mean putting in unlimited effort. Just an appropriate effort level given the importance of the issue and the other things you have to do with your time.)


Dagny at 5:26 PM on May 22, 2019 | #12480 | reply | quote

Want to discuss this? Join my forum.

(Due to multi-year, sustained harassment from David Deutsch and his fans, commenting here requires an account. Accounts are not publicly available. Discussion info.)
