Buy my new Educational Videos: Reading George Reisman's book on Marxism and Socialism for $150. Educate yourself on capitalism and socialism. Click the link for details.
Austrian Theorizing: Recalling The Foundations by Walter Block:
“Indifference,” for the Austrian School is a technical word. We deny that indifference is compatible with human action, the attempt to render the world a more preferable place than would have occurred had no such act taken place. Were a man truly indifferent between state of the world A (the one which would ensue without his intervention) and state of the world B, he would not act so as to make the latter more likely.
We do not deny, however, that “indifference” also has a perfectly reasonable usage in common parlance. In ordinary language, a person could be readily understood to be indifferent between wearing a green or a blue sweater. This means that he doesn’t care much which one he chooses. Given that he will only wear one of them at a time, and chooses the green, he is still reckoned, speaking loosely, to be indifferent between them, because we can readily imagine him picking the other.4
But if we were to “get technical” about the matter, it would be at the very least extremely puzzling for a man to select the green sweater in preference to the blue if he were truly indifferent between them. Indeed, this would be nothing less than a logical contradiction. If indifference were his exact mental state, surely he would select neither article of clothing. As in the case of Buridan’s Ass, who avoided both piles of equidistant hay, he would eschew both sweaters.5 Very much to the contrary, if when presented with both the person selected green instead of blue, we as outside analysts, or economists, would be entitled to infer from this act a preference for green.
People don't get stuck like the donkey who can't choose between two equidistant piles of hay. They do other things. When they are indifferent they can use a tie breaker, even an arbitrary one like a coin flip. Then they can say, "Yes I chose the green sweater, not the blue one. But that doesn't mean I prefer it. I chose it by coin flip."
When people are genuinely indifferent but prefer to make a choice rather than do nothing, they act accordingly – they make the decision somehow, such as with a coin flip or by examining their preferences more carefully and discovering they were only indifferent to some level of precision but not infinitely indifferent.
Coin flips are very well known. Why doesn't Block address this? It's true that the coin flipper preferred the coin flip method over alternatives like sitting there unable to choose. But that is a different preference than preferring a green sweater to a blue sweater.
Block also specifically covers the sweater example:
Our author’s second sally (1999, p. 826) is “One can only observe that I choose a green sweater, but this does not rule out the possibility that I was actually indifferent between a green sweater and a blue sweater.” In common parlance, we are certainly prepared to accept Caplan’s introspection on the matter. Presumably, he has no strong preference for the one color over the other. But as a matter of technical economics, we are hard put, on the basis of indifference, to account for the fact that he did indeed pick up and put on the green sweater, when he could have had the blue. Perhaps the green one was on top of the blue, and some slight additional effort would have been necessary to wear the latter; perhaps they were side by side, but at the last minute, even thinking he was fully indifferent, he veered toward the green based on a very slight, perhaps even unconscious preference. All we know is that he dug into his sweater draw, and came up with the green one. What else are we to infer but that he preferred his color?
Block seems to be missing the point that you could prefer the top sweater, which is a different thing than preferring the green sweater. In the last words of the paragraph, Block summarizes a preference for a more convenient sweater as a preference for "his color" (that is, the color he chose). But that's silly. He may not have been choosing a color of sweater, but a proximity of sweater, as Block himself just said. (Caplan, the inferior thinker with whom Block is arguing, simply fails to provide any notable analysis of this matter to discuss. But Block, so starved of discussion – after I was ejected, years ago, from the Mises email group (where Block participates) for lacking the authority of credentials – is pleased to have Caplan to talk with, as he says in the introduction.)
As Critical Rationalism explains, actions don't speak for themselves any more than data does. It takes intellectual interpretation to figure out why a man took a particular action. Block seems to know this when talking about getting into the mind of the economic actor, but seems to forget it in this part about the sweaters. When you get into the mind of the actor and try to understand his purpose, you may discover he wasn't choosing by color.
Did he take that action because he preferred green over blue, or because he preferred a closer sweater? The action of picking up a sweater alone can't tell you on which basis the man made the choice (color, location, something else, or a mix), only that, in that instance, he preferred the green-and-closer-and-many-other-things sweater over the alternatives (he preferred the whole bundle of traits that he chose). The bundle of traits people choose is always infinite, so it takes explanations and critical thinking to determine which traits were important to the person's preference. Such analysis matters because people have preferences which are not unique to each individual choice, but instead allow them to make many choices according to common themes. These ongoing, persistent preferences are key parts of a person's life/personality/thinking which enable him to do any planning regarding the future, and enable him to be understood by others.
Read our discussion about correlation studies (24 page PDF). They don't work, they're full of inductivism, and most social "science" is counter productive.
Read our discussion about religion, anti-depressants and more (55 page PDF). It includes criticism of the violent, leftist ideas of Sam Harris and how they are worse than Bernie.
Here's the link to join the Discord chatroom yourself. It's public.
Andy asked about his reddit topic, So how does Diversity make America strong?
curi: diversity of skills is a strength in economics.
curi: diversity of ideas/perspectives helps with brainstorming
curi: diversity of skin colors is ... a racist goal?
curi: diversity of cultures is somewhat related to intellectual diversity, and also it's good to see alternatives to one's cultural customs
curi: but that applies way more to chopsticks than to learning from other cultures how to more brutally control children or wage wars
curi: anime is good diversity, different way of using TV and art than the US does.
HeuristicWorld (Andy): What about creativity, do you think it boosts it?
curi: lots of these benefits are available via travel, book translations, the internet, intellectual collaboration with foreigners, global free trade, etc
curi: ppl don't rly know what they mean by "creativity". very vague
curi: and the assumption it comes in degrees is also not thought through
HeuristicWorld (Andy): I am thinking of "creative" problem solving. Like interesting solutions to engineering problems that is unique to the US that is not true of more homogenous cultures.
curi: if ppl were more rational, they could learn more from japanese approaches to certain problems. some ppl have tried but they could do a better job.
HeuristicWorld (Andy): Are you thinking of something specific or their overall approach to problem solving
curi: but ppl also often fail to learn much from English materials, so shrug
curi: there are lots of examples including toyota production system
curi: and that rent in tokyo is lower than in SF
HeuristicWorld (Andy): How did they manage that?
curi: by building homes
curi: "A seemingly silly gesture is done for the sake of safety." It's very good that their culture teaches people to act properly instead of sacrificing safety to try to avoid looking silly.
curi: it's not just trains. there are related things for software engineering to prevent mistakes.
curi: on the other hand, salaryman culture sucks
curi: and their culture makes it hard to express disagreement/negativity/conflict to ppl. overly polite.
curi: and asian cultures in general overrate hard work as against smart work.
curi: you see this a ton in education
HeuristicWorld (Andy): Oh that reminds me. How much do you know about these companies that are implementing 4 day workweeks that seem to have increased productivity. Do you think its a good idea?
curi: it depends on the industry
curi: it's a good idea in software
curi: knowledge workers in general can do around 3 hours of hard work per day, on avg.
curi: sustaining more than that leads to burnout
curi: and mistakes in software are expensive. bug fixing can take up a lot of time and effort.
curi: it's better to try seriously to write great code upfront than to have an unnecessarily high bug rate due to tired coders
A new guy joined the Fallible Ideas chatroom on Discord (come join us and ask a question!) and had some questions about Objectivism. Excerpt:
The objectivist ideology is lacking in empathy. <-- this is the claim
How likely is that to be the case?
Objectivism strongly lacks empathy in some cases, and has plenty in others. it depends on the situation and the things at stake. Harris doesn't attempt to define empathy, investigate when it's good or bad, appropriate or inappropriate, and engage with the Objectivist position on the matter or explain his own mainstream position on the matter.
In one of my favorite Rand quotes, she suggests redirecting empathy from some less important causes to another more important cause. The point is a disagreement about which things (smart youths or ducks) are more deserving of empathy, charity, help.
They [young fighters for ideas, rebels against conformity, independent minds seeking the truth] perish gradually, giving up, extinguishing their minds before they have a chance to grasp the nature of the evil they are facing [our irrational culture]. In lonely agony, they go from confident eagerness to bewilderment to indignation to resignation—to obscurity. And while their elders putter about, conserving redwood forests and building sanctuaries for mallard ducks, nobody notices those youths as they drop out of sight one by one, like sparks vanishing in limitless black space; nobody builds sanctuaries for the best of the human species.
Read the full conversation: 12 page PDF
Dagny wrote (edited slightly with permission):
I think I made a mistake in the discussion by talking about more than one thing at once. The problem with saying multiple things is he kept picking some to ignore, even when I asked him repeatedly to address them. See this comment and several comments near it, prior, where I keep asking him to address the same issue. but he wouldn't without the ultimatum that i stop replying. maybe he still won't.
if i never said more than one thing at once, it wouldn't get out of hand like this in the first place. i think.
I replied: I think the structure of conversations is a bigger contributor to the outcome than the content quality is. Maybe a lot bigger.
I followed up with many thoughts about discussion structure, spread over several posts. Here they are:
In other words, improving the conversation structure would have helped with the outcome more than improving the quality of the points you made, explanations you gave, questions you asked, etc. Improving your writing quality or having better arguments doesn't matter all that much compared to structural issues like what your goals are, what his goals are, whether you mutually try to engage in cooperative problem solving as issues come up, who follows whose lead or is there a struggle for control, what methodological rules determine which things are ignorable and which are replied to, and what are the rules for introducing new topics, dropping topics, modifying topics?
it's really hard to control discussion structure. people don't wanna talk about it and don't want you to be in control. they don't wanna just answer your questions, follow your lead, let you control discussion flow. they fight over that. they connect control over the discussion structure with being the authority – like teachers control discussions and students don't.
people often get really hostile, really fast, when it comes to structure stuff. they say you're dodging the issue. and they never have a thought-out discussion methodology to talk about, they have nothing to say. when it comes to the primary topic, they at least have fake or dumb stuff to say, they have some sorta plan or strategy or ideas (or they wouldn't be talking about it). but with stuff about how to discuss, they can't discuss it, and don't want to – it leads so much more quickly and effectively to outing them as intellectual frauds. (doesn't matter if that's your intent. they are outed because you're discussing rationality more directly and they have nothing to say and won't do any of the good ideas and don't know how to do the good ideas and can't oppose them either).
sometimes people are OK with discussion methodology stuff like Paths Forward when it's just sounds-good vague general stuff, but the moment you apply it to them they feel controlled. they feel like you are telling them what to do. they feel pressured, like they have to discuss the rational way. so they rebel. even just direct questions are too controlling and higher social status, and people rebel.
some types of discussion structure. these aren’t about controlling the discussion, they are just different ways it can be organized. some are compatible with each other and some aren’t (you can have multiple from the list, but some exclude each other):
i’ve been noticing structure problems in discussions more in the last maybe 5 years. Paths Forward and Overreaching address them. lots of my discussions are very short b/c we get an impasse immediately b/c i try to structure the discussion and they resist.
like i ask them how they will be corrected if they’re wrong (what structural mechanisms of discussion do they use to allow error correction) and that ends the discussion.
or i ask like “if i persuade you of X, will you appreciate it and thank me?” before i argue X. i try to establish the meaning X will have in advance. why bother winning point X if they will just deny it means anything once you get there? a better way to structure discussion is to establish some stakes around X in advance, before it’s determined who is right about X.
i ask things like if they want to discuss to a conclusion, or what their goal is, and they won’t answer and it ends things fast
i ask why they’re here. or i ask if they think they know a lot or if they are trying to learn.
ppl hate all those questions so much. it really triggers the fuck out of them
they just wanna argue the topic – abortion or induction or whatever
asking if they are willing to answer questions or go step by step also pisses ppl off
asking if they will use quotes or bottom post. asking if they will switch forums. ppl very rarely venue switch. it’s really rare they will move from twitter to email, or from email to blog comments, or from blog comments to FI, etc
even asking if they want to lead the discussion and have a plan doesn’t work. it’s not just about me controlling the discussion. if i offer them control – with the caveat that they answer some basic questions about how they will use it and present some kinda halfway reasonable plan – they hate that too. cuz they don’t know how to manage the discussion and don’t want the responsibility or to be questioned about their skill or knowledge of how to do it.
structure/rules/organization for discussion suppresses ppl’s bullshit. it gives them less leeway to evade or rationalize. it makes discussion outcomes clearer. that’s why it’s so important, and so resisted.
the structure or organization of a discussion includes the rules of the game, like whether people should reply more tomorrow or whether it's just a single day affair. the rules for what people consider reasonable ways of ending a discussion are a big deal. is "i went to sleep and then chose not to think about it the next day, or the next, or the next..." a reasonable ending? should people actually make an effort to avoid that ending, e.g. by using software reminders?
should people take notes on the discussion so they remember earlier parts better? should they quote from old parts? should they review/reread old parts?
a common view of discussion is: we debate issue X. i'm on side Y, you're on side Z. and ppl only say stuff for their side. they only try to think about things in a one-sided, biased way. they fudge and round everything in their favor. e.g. if the number is 15, they will say "like 10ish" or "barely over a dozen" if a smaller number helps their side. and the other guy will call it "around 20" or "nearly 18".
a big part of structure is: do sub-plots resolve? say there's 3 things. and you are trying to do one at a time, so you pick one of the 3 and talk about that. can you expect to finish it and get back to the other 2 things, or not? is the discussion branching to new topics faster than topics are being resolved? are topics being resolved at a rate that's significantly different from zero, or is approximately nothing being resolved?
another part of structure is how references/cites/links are used. are ideas repeated or are pointers to ideas used? and do people try to make stuff that is suitable for reuse later (good enough quality, general purpose enough) or not? (a term similar to suitable for reuse is "canonical").
I already knew that structural knowledge is the majority of knowledge. Like a large software project typically has much more knowledge in the organization than the “payload” (aka denotation aka direct purpose). “refactoring” refers to changing only the structure while keeping the function/content/payload/purpose/denotation the same. refactoring is common and widely known to be important. it’s an easy way for people familiar with the field to see that significant effort goes into software knowledge structure cuz that is effort that’s pretty much only going toward structure. software design ideas like DRY and YAGNI are more about structure than content. how changeable software is is a matter of structure ... and most big software projects have a lot more effort put into changes (like bug fixes, maintenance and new features) than into initial development. so initial development should focus more effort on a good structure (to make changes easier) than on the direct content.
it does vary by software type. games are a big exception. most games have most of their sales near release. most games aren’t updated or changed much after release. games still need pretty good structure though or it’d be too hard to fix enough of the bugs during initial development to get it shippable. and they never plan the whole game from the start, they make lots of changes during development (like they try playing it and think it’s not fun enough, or find a particular part works badly, and change stuff to make it better), so structure matters. wherever you have change (including error correction), structure is a big deal. (and there’s plenty of error correction needed in all types of software dev that make substantial stuff. you can get away with very little when you write one line of low-risk code directly into a test-environment console and aren’t even going to reuse it.)
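To make the refactoring point concrete, here's a minimal sketch (the discount example and all the names in it are hypothetical, invented for illustration). Extracting duplicated logic into a helper (DRY) is pure structure change: every caller returns the same results as before, but changing the shared rule now takes one edit instead of two.

```python
# Hypothetical before/after sketch: refactoring changes structure, not behavior.

# Before: the discount logic is duplicated, so changing it means editing two places.
def price_with_member_discount(price):
    return round(price * 0.9, 2)

def price_with_coupon_discount(price):
    return round(price * 0.9, 2)

# After: the shared rule is extracted (DRY), so one edit changes both callers.
def apply_discount(price, rate=0.9):
    return round(price * rate, 2)

def member_price(price):
    return apply_discount(price)

def coupon_price(price):
    return apply_discount(price)

# Same function/content for every input; only the structure differs.
assert price_with_member_discount(100) == member_price(100) == 90.0
assert price_with_coupon_discount(19.99) == coupon_price(19.99)
```

The before and after versions are indistinguishable to a user of the code; the difference only shows up when you try to change it.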
it makes sense that structure related knowledge is the majority of the issue for discussion. i figured that was true in general but hadn’t applied it enough. knowledge structure is hard to talk about b/c i don’t really have people who are competent to discuss it with me. it’s less developed and talked through than some other stuff like Paths Forward or Overreaching. and it’s less clear in my mind than YESNO.
so to make this clearer:
structure is what determines changeability. various types of change are high value in general, including especially error correction. wherever you see change, especially error correction, it will fail without structural knowledge. if it’s working ok, there’s lots of structural knowledge.
it’s like how the capacity to make progress – like being good at learning – is more important than how much you know or how good something is now. like how a government that can correct mistakes without violence is better than one with fewer mistakes today. (in other words, the structure mistake of needing violence to correct some categories of mistake is a worse mistake than the non-structure mistake of taxing cigarettes and gas. the gas tax doesn’t make it harder to make changes and correct errors, so it’s less bad of a mistake in the long run.)
Intro to knowledge structure (2010):
Original posts after DD told me about it (2003)
The core idea of knowledge structure is that you can do the same task/function/content in different ways. You may think it doesn’t matter as long as the result is (approximately) the same, but the structure matters hugely if you try to change it so it can do something else.
“It” can be software, an object like a hammer, ideas, or processes (like the processes factory workers use). Different software designs are easier to add features to than others. You can imagine some hammer designs being easier to convert into a shovel than others. Some ideas are easier to change than others. Or imagine two essays arguing equally effectively for the same claim, and your task is to edit them to argue for a different conclusion – the ease of that depends on the internal design of the essays. And for processes, for example the more the factory workers have each memorized a single task, and don’t understand anything, the more difficult a lot of changes will be (but not all – you could convert the factory to build something else if you came up with a way to build it with simple, memorizable steps). Also note the ease of change often depends on what you want to change to. Each design makes some sets of potential changes harder or easier.
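The factory example above can be sketched in code (a hypothetical toy, with invented names): two designs that produce the identical result today, where one hardcodes the steps (like workers who each memorized a single task) and the other treats the steps as data, so converting to a new product is a data edit rather than a code rewrite.

```python
# Hypothetical sketch: same task/function/content, two different structures.

# Design A: each step is hardcoded, like workers who each memorized one task.
def assemble_a():
    return ["cut", "weld", "paint"]

# Design B: the steps are data; retooling the line means editing a list, not code.
ASSEMBLY_STEPS = ["cut", "weld", "paint"]

def assemble_b(steps=ASSEMBLY_STEPS):
    return list(steps)

# Identical result today – a user of the output can't tell the designs apart.
assert assemble_a() == assemble_b()

# But only Design B converts to a new product without touching the code:
assert assemble_b(["mold", "trim"]) == ["mold", "trim"]
```

Note the last line also illustrates the closing point: Design B is only "easier to change" for changes that fit its step-list shape. A change that breaks that shape (say, steps that run in parallel) would be hard for both designs.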
Back to the ongoing discussion (which FYI is exploratory rather than having a clear conclusion):
“structure” is the word DD used. Is it the right word to use all the time?
I think “design” and “organization” are good words. “Form” can be good contextually.
What about words for the non-structure part?
The lists help clarify the meaning – all the words together are clearer than any particular one.
What does a good design offer besides being easier to change?
Flexibility: solves a wider range of relevant problems (without needing to change it, or with a smaller/easier change). E.g. a car that can drive in the snow or on dry roads, rather than just one or the other.
Easier to understand. Like computer code that’s easier to read due to being organized well.
Made up of somewhat independent parts (components) which you can separate and use individually (or in smaller groups than the original total thing). The parts being smaller and more independent has advantages but also often involves some downsides (like you need more connecting “glue” parts and the attachment of components is less solid).
Easier to reuse for another purpose. (This is related to changeability and to components. Some components can be reused without reusing others.)
Internal reuse (references, pointers, links) rather than new copies. (This is usually but not always better. In general, it means the knowledge is present that two instances are actually the same thing instead of separate. It means there’s knowledge of internal groupings.)
Good structures are set up to do work (in a certain somewhat generic way), and can be told what type of work, what details. Bad structures fail to differentiate what is parochial details and what is general purpose.
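The internal-reuse point in the list above can be shown with a toy example (hypothetical, for illustration): when two names are references to one object, the knowledge that they are "the same thing" is preserved in the structure itself; with copies, that knowledge is lost and the two instances can silently drift apart.

```python
# Hypothetical sketch: internal reuse via references vs. separate copies.

config = {"color": "green"}

# Copies: the knowledge that these were "the same thing" is lost.
a_copy = dict(config)
b_copy = dict(config)
a_copy["color"] = "blue"
assert b_copy["color"] == "green"  # the copies have drifted apart

# References: both names point at one object, so an update shows up everywhere.
a_ref = config
b_ref = config
a_ref["color"] = "blue"
assert b_ref["color"] == "blue"  # one instance, reused internally
```

As the list item says, reuse is usually but not always better: the copy version is sometimes what you want, precisely because the instances are then free to diverge.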
The more you treat something as a black box (never take it apart, never worry about the details of how it works, never repair it, just use it for its intended purpose), the less structure matters.
In general, the line between function and design is approximate. What about the time it takes to work, or the energy use, or the amount of waste heat? What are those? You can do the same task (same function) in different ways, which is the core idea of different structures, and get different results for time, energy and heat use. They could be considered to be related to design efficiency. But they could also be seen as part of the task: having to wait too long, or use too much energy, could defeat the purpose of the task. There are functionality requirements in these areas or else it would be considered not to work. People don’t want a car that overheats – that would fail to address the primary problem of getting them from place to place. It affects whether they arrive at their destination at all, not just how the car is organized.
(This reminds me of computer security. Sometimes you can beat security mechanisms by looking at timing. Like imagine a password checking function that checks each letter of the password one by one and stops and rejects the password if a letter is wrong. That will run more slowly based on getting more letters correct at the start. So you can guess the password one letter at a time and find out when you have it right, rather than needing to guess the whole thing at once. This makes it much easier to figure out the password. Measuring power usage or waste heat could work too if you measured precisely enough or the difference in what the computer does varied a large enough amount internally. And note it’s actually really hard to make the computer take exactly the same amount of time, and use exactly the same amount of power, in different cases that have the same output like “bad password”.)
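The letter-by-letter password check described above can be sketched directly (a simplified illustration; real systems compare hashes, not raw passwords). The leaky version exits at the first wrong letter, so its runtime grows with the number of correct leading letters; the safe version uses Python's `hmac.compare_digest`, which is designed to take the same time regardless of where the mismatch is.

```python
import hmac

# Early-exit check: runtime depends on how many leading letters are correct,
# which is exactly the timing leak described above.
def check_password_leaky(guess, secret):
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False  # stops at the first wrong letter
    return True

# Constant-time comparison: examines every byte regardless of mismatches,
# so timing reveals (roughly) nothing about which letters were right.
def check_password_safe(guess, secret):
    return hmac.compare_digest(guess.encode(), secret.encode())

# Both give the same answers; they differ only in how long they take to say "no".
assert check_password_leaky("hunter2", "hunter2")
assert not check_password_leaky("hunter1", "hunter2")
assert check_password_safe("hunter2", "hunter2")
assert not check_password_safe("hunter1", "hunter2")
```

This is another case where something that looks like pure "form" (how the comparison loop is organized) turns out to matter functionally.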
Form and function are related. Sometimes it’s useful to mentally separate them but sometimes it’s not helpful. When you refactor computer code, that’s about as close to purely changing the form as it gets. The point of refactoring is to reorganize things while making sure it still does the same thing as before. But refactoring sometimes makes code run faster, and sometimes that’s a big deal to functionality – e.g. it could increase the frame rate of a game from non-playable to playable.
Some designs actively resist change. E.g. imagine something with an internal robot that goes around repairing any damage (and it’s programmed to see any deviation or difference as damage – it tries to reverse all change). The human body is kind of like this. It has white blood cells and many other internal repair/defense mechanisms that (imperfectly) prevent various kinds of changes and repair various damage. And a metal hammer resists being changed into a screwdriver; you’d need some powerful tools to reshape it.
The core idea of knowledge structure is that you can do the same task/function/content in different ways. You may think it doesn’t matter as long as the result is (approximately) the same, but the structure matters hugely if you try to change it so it can do something else.
Sometimes programmers make a complicated design in anticipation of possible future changes that never happen (instead it's either no changes, other changes, or just replaced entirely without any reuse).
It's hard to predict in advance which changes will be useful to make. And designs aren't just "better at any and all changes" vs. "worse at any and all changes". Different designs make different categories of changes harder or easier.
So how do you know which structure is good? Rules of thumb from past work, by many people, doing similar kinds of things? Is the software problem – which is well known – just some bad rules of thumb (that have already been identified as bad by the better programmers)?
- Made up of somewhat independent parts (components) which you can separate and use individually (or in smaller groups than the original total thing). The parts being smaller and more independent has advantages but also often involves some downsides (like you need more connecting “glue” parts and the attachment of components is less solid).
this is related to the desire for FI emails to be self-contained (have some independence/autonomy). this isn't threatened by links/cites cuz those are a loose coupling, a loose way to connect to something else.
- Easier to reuse for another purpose. (This is related to changeability and to components. Some components can be reused without reusing others.)
but, as above, there are different ways to reuse something and you don't just optimize all of them at once. you need some way to judge what types of reuse are valuable, which partly seems to depend on having partial foresight about the future.
The more you treat something as a black box (never take it apart, never worry about the details of how it works, never repair it, just use it for its intended purpose), the less structure matters.
sometimes the customer treats something as a black box, but the design still matters a lot for:
In general, the line between function and design is approximate.
like the line between object-discussion and meta-discussion is approximate.
as discussion structure is crucial (whether you talk about it or not), most stuff has more meta-knowledge than object-knowledge. here's an example:
you want to run a small script on your web server. do you just write it and upload? or do you hook it into existing reusable infrastructure to get automatic error emails, process monitoring that'll restart the script if it's not running, automatic deploys of updates, etc?
you hook it into the infrastructure. and that infrastructure has more knowledge in it than the script.
when proceeding wisely, it's rare to create a ton of topic-specific knowledge without the project also using general purpose infrastructure stuff.
Form and function are related.
A lot of the difference between a smartphone and a computer is the shape/size/weight. That makes them fit different use cases. An iPhone and iPad are even more similar, besides size, and it affects what they're used for significantly. And you couldn't just put them in an arbitrary form factor and get the same practical functionality from them.
Discussion and meta-discussion are related too. No one ever entirely skips/omits meta discussion issues. People consider things like: what statements would the other guy consent to hear and what would be unwanted? People have an understanding of that and then don't send porn pics in the middle of a discussion about astronomy. You might complain "but that would be off-topic". But understanding what the topic is, and what would be on-topic or off-topic is knowledge about the discussion, rather than directly being part of the topical discussion. "porn is off topic" is not a statement about astronomy – it is itself meta discussion which is arguably off topic. you need some knowledge about the discussion in order to deal with the discussion reasonably well.
Some designs actively resist change.
memes resist change too. rational and static memes both resist change, but in different ways. one resists change without reasons/arguments, the other resists almost all change.
Discussion and meta-discussion are related too.
House of Sunny podcast. This episode was recommended for Trump and Putin info at http://curi.us/2041-discussion#c10336
This is all meta so far. It’s not the information the show is about (Trump and Putin politics discussion). It’s about the show. It’s telling you what kind of show it’s going to be, and who the host is. That’s just like discussing what kind of discussion you will have and the background of a participant.
The intro also links the show to a reusable show structure that most listeners are familiar with. People now know what type of show it is, and what to expect. I didn’t listen to much of the episode, but for the next few minutes the show does live up to genre expectations.
I consider the intro long, heavy-handed and blatant. But most people are slower and blinder, so maybe it’s OK. I dislike most show intros. Offhand I only remember liking one on YouTube – and he stopped because more fans disliked it than liked it. It’s 15 seconds and I didn’t think it had good info.
KINGmykl intro: https://www.youtube.com/watch?v=TrN5Spr1Q4A
One thing I notice, compared to the Sunny intro, is it doesn’t pretend to have good info. It doesn’t introduce mykl, the show, or the video. (He introduces his videos non-generically after the intro. He routinely asks how your day is going, says his is going great, and quickly outlines the main things that will be in the video cuz there’s frequently multiple separate topics in one video. Telling you the outline of the upcoming discussion is an example of useful meta discussion.)
The Sunny intro is so utterly generic I found it boring the first time I heard it. I’ve heard approximately the same thing before from other shows! I saw the mykl intro dozens of times, and sure I skipped it sometimes but not every time, and I remember it positively. It’s more unique, and I don’t understand it as well (it has some meaning, but the meaning is less clear than in the Sunny intro.) I also found the Sunny intro to scream “me too, I’m trying hard to fit in and do this how you’re supposed to” and the mykl intro doesn’t have that vibe to me. (I could pretty easily be wrong though, maybe they both have a fake, tryhard social climber vibe in different ways. Maybe i’m just not familiar enough with other videos similar to mykl’s and that’s why I don’t notice. I’ve watched lots of gaming video content, but a lot of that was on Twitch so it didn’t have a YouTube intro. I have seen plenty of super bland gamer intros. mykl used to script his videos and he recently did a review of an old video. He pointed out ways he was trying to present himself as knowing what he’s talking about, and found it cringey now. He mentioned he stopped scripting videos a while ago.)
Example 2: Chef Heidi Teaches Hoonmaru to Cook Korean Short Rib
The last three are things after “let’s get started” that still aren’t cooking. Cooking finally starts at 48s in. But after a couple seconds of cooking visuals, hoonmaru answers an off-topic fan question before finally getting some cooking instruction. Then a few seconds later hoonmaru is neglecting his cooking, and Heidi fixes it while he answers more questions. Then hoonmaru says he thinks the food looks great so far but that he didn’t do much. This is not a real cooking lesson, it’s just showing off Heidi’s cooking for the team and entertaining hoonmaru fans with his answers to questions that aren’t really related to Overwatch skill.
Tons of effort goes into setting up the video. It’s under 6 minutes and spent 13.5% on the intro. I skipped ahead and they also spend 16 seconds (4.5%) on the ending, for a total of 18% on intro and ending. And there’s also structural stuff in the middle, like saying now they will go cook the veggies while the meat is cooking – that isn’t cooking itself, it’s structuring the video and activities into defined parts to help people understand the content. And they asked hoonmaru what he thought of the meat on the grill (looks good... what a generic question and answer) which was ending content for that section of the video.
off topic, Heidi blatantly treats hoonmaru like a kid. at 4:45 she’s making a dinner plate combining the foods. then she asks if he will make it, and he takes that as an order (but he hadn’t realized in advance he’d be doing it, he just does whatever he’s told without thinking ahead). and then the part that especially treats him like a kid is she says she’s taking away the plate she made so he can’t copy it, he has to try to get the right answer (her answer) on his own, she’s treating it like a school test. then a little later he’s saying his plating sucks and she says “you did a great job, it’s not quite restaurant”. there’s so much disgusting social from both of them.
The goals of Media Matters include:
Serial misinformers and right-wing propagandists inhabiting everything from social media to the highest levels of government will be exposed, discredited.
Internet and social media platforms, like Google and Facebook, will no longer uncritically and without consequence host and enrich fake news sites and propagandists.
Toxic alt-right social media-fueled harassment campaigns that silence dissent and poison our national discourse will be punished and halted.
They don't want there to be neutral platforms where conservatives can speak.
CREW will be the leading nonpartisan ethics watchdog group in a period of crisis with a president and administration that present possible conflicts of interest and ethical problems on an unprecedented scale. CREW will demand ethical conduct from the administration and all parts of government, expose improper influence from powerful interests, and ensure accountability when the administration and others shirk ethical standards, rules, and laws. Here's what success will look like:
- Trump will be afflicted by a steady flow of damaging information, new revelations, and an inability to avoid conflicts issues.
- The Trump administration will be forced to defend illegal conduct in court.
- Powerful industries and interest groups will see their influence wane.
- Dark money will be a political liability in key states.
Amazing they call CREW, one of their groups, "nonpartisan". They publicly present it that way, but they are lying and plan for it to afflict Trump with damaging information. The memo calls CREW one of "our institutions". David Brock, founder of Media Matters, was elected chairman of CREW in 2014. They show CREW as one of their groups and strategize what it will do. They also say:
CREW is a nonpartisan 501(c)(3) organization.
Jeez. And lying that it's nonpartisan is even part of its motto.
So you can see that they don't care about the truth. The memo is full of lies. What do they care about?
Leverage our authority to encourage good journalism.
They want to control what journalists say. They repeatedly call anything agreeing with their partisan, leftist views "good".
MEDIA ADVOCACY-PUNISH ENABLING AND COMPLACENCY
They want to punish journalists who don't comply. In addition to punishing journalists who help the right (e.g. by sharing any of their ideas), they want to punish complacent journalists – that is, journalists who don't actively fight for the left.
GOT FACEBOOK TO COMMIT TO FIGHTING THE RISE OF FAKE NEWS.
During the 2016 election, Facebook refused to do anything about the dangerous rise of fake news or even acknowledge their role in promoting disinformation: Mark Zuckerberg called the notion that fake news is a problem "crazy." In November, we launched a campaign pressuring Facebook to: 1) acknowledge the problem of the proliferation of fake news on Facebook and its consequences for our democracy and 2) commit to taking action to fix the problem. As a result of our push for accountability, Zuckerberg did both. Our campaign was covered by prominent national political, business, and tech media outlets, and we've been engaging with Facebook leadership behind the scenes to share our expertise and offer input on developing meaningful solutions.
They brag about their success controlling the world's information.
$170,000,000 in earned TV airtime for Media Matters research and video since 2013
$311,685,233 value of TV airtime for Bridge-placed research and video since 2011
PUSHED ROGER STONE'S BIGOTRY OFF CABLE NEWS.
DISRUPTED RUPERT MURDOCH'S TIME WARNER EXPANSION.
DROVE UP THE KOCH BROTHERS' NEGATIVES.
They are proud of these things. Meanwhile they complain that a right wing group has $18 million in funding to try to get their message out, and complain that right wingers can talk on Facebook without having billionaire funders or pulling the strings of journalists.
The onslaught of well-funded right-wing media manipulation brings with it significant challenges.
The conservative Media Research Center, with an annual operating budget of $18 million, works closely with establishment right-wing media to reinforce the myth of a liberally biased media, push journalism to the right, and propel misinformation into the mainstream.
That's an onslaught? If you compare funding, the left is way ahead.
DROVE COVERAGE THAT LED TO TRUMP ENTERING OFFICE AS THE LEAST POPULAR PRESIDENT-ELECT IN MODERN HISTORY.
While the dynamics of the election overcame Trump's sky-high negatives, the groundwork we laid will be critical to delegitimizing Trump as president. Bridge drove 673 stories throughout the campaign exposing Trump's unstable temperament, scam-filled business record, history of sexual abuse and misogyny, and racist behavior. As he enters office, he is the most unpopular president-elect in modern history.
They brag about being the puppet master behind 673 anti-Trump media stories. Meanwhile they don't want the Media Research Center to exist or do anything to counter them.
Responsible for more than 40% of the total fines given out by the FEC in 2016 and just about all of the fines levied in 2016 resulting from complaints by good government groups
Besides pulling the strings of journalists, they are working to control government agencies to fine their political opponents.
“Over the years, Media Matters has won or assisted in a number of tangible victories, from getting Glenn Beck off cable news to holding 60 Minutes accountable for its faulty Benghazi reporting.”
They are proud to deplatform their enemies and kill stories they don't like. They want more "tangible victories". At the same time that they brag about how effective they are, they also try to say the right wing is far more influential than people think, and they need to do more to control all sources of information to only promote their favored "progressive" ideas and policies. Meanwhile, they falsely present their organized strategy as coming from independent groups, including CREW which lies that it's nonpartisan and gets favored tax status.
“It’s often easy to trace Media Matters’ influence on a major news story.”
They would hate for any right wing group to have that influence, but they brag about their own influence.
They want to control speech to advance their far-left "progressive" agenda. They are completely driven by trying to win for their side, and will resort to lying, hypocrisy, and anything else they think will work. They think most Americans are stupid and gullible, and that they can fool us. They want to be puppet masters who tell us what to think, indoctrinate our children in school, and impose their vision of society on us all.
This post is highlights from my discussion about liberalism on the Fallible Ideas discord chatroom (it's public, you can join and ask questions or discuss).
curi: liberalism is a principled system. because it's based on principles, it is "extreme". the principles are taken seriously using logic. no exceptions are allowed for the sake of moderation, only due to logical reasoning.
curi: the liberal system of government does not consider basic needs. that is not the principle. hence it doesn't provide food. the logic is totally different.
curi: the purpose of a liberal government is to protect man's rights. that means, essentially, to protect man against violence.
curi: building roads or schools does not protect man's rights, and therefore is not the proper business of government.
curi: the government is special compared to other organizations. it is tasked with controlling the use of violence. because of its involvement with violence, its size and function should be minimized. violence is very dangerous. you don't want the people with the guns doing extra stuff, like being involved with farming, because that gets guns and violence involved with farming unnecessarily.
curi: it would be nice if the government could be funded in a fully voluntary way, but we don't currently know how to do that, so we allow it to raise taxes by force.
curi: this is a very dangerous power – the initiation of force against people who are not persuaded to cooperate voluntarily – so its use must be strictly controlled and minimized.
curi: the necessary condition for a peaceful society and successful government – which is highly desirable so that men can have their rights protected, including against murder and robbery – is the consent of the governed. without voluntary consent from the majority (preferably a large majority), bad things happen. the government has to use force to suppress revolutions, or there will be a revolution. but as long as most people are willing to pay taxes and consent to it, then it can work, and threatening the small minority with violence is unfortunate but at least doesn't destabilize or ruin society.
curi: the situation of living with one's rights protected, and without any limits on your actions other than not violating the rights of others, is called freedom. in short, it's freedom from violence, and freedom to do non-violent actions. in a free society, capitalism is largely implied. what's to stop it? people are free to trade, and to decline trades. people are free to bargain as they will amongst themselves, hire and fire each other, offer goods or services at any prices they wish, etc, etc. people could choose to act in other ways, like giving lots of stuff away without worrying about money prices, so understanding capitalism and its advantages is important, but most current deviations from capitalism could not happen in that situation because they involve rights violations. government intervention in the economy (price controls, tariffs, taxation to fund things other than defending men's rights) and socialism are rights violations – violent attacks on freedom.
curi: in this society, the division of labor is advantageous. it allows economic specialization. instead of us both producing 2 things, we can each produce 1 and trade, and be better at it b/c we have less things to optimize. division of labor also fosters peace (with neighbors, within a country, and internationally) b/c if you use violence against trading partners after you specialize then you dramatically lower your standard of living (b/c you're bad at producing the thing you relied on them to produce).
curi: liberalism is a system where men deal with each other by reason and voluntary cooperation, not violence. you can make offers and appeal to people's reason. interactions only happen for mutual benefit because, given freedom, people will decline offers they don't think benefit them. more or less anything may be accomplished if men are persuaded to do it, but men who are not persuaded are free to live their own life their own way. in this way, people who choose to be involved in projects (like business ventures) take responsibility for the outcomes, and gain the rewards or suffer the losses.
curi: if you want someone to do something or give you something, you can persuade him (including by offering things in return, or by arguing in favor of charity, or by saying what an important use for it you have and why you can't pay, or whatever else) or you can find another option. it's up to him whether to listen, or agree, or not. you don't get to control his life or his property. but if you have a really good idea that requires his property, then the typical thing that happens is: you offer more money for it than it's worth to him, and you both benefit. the reason this works is you have a better use of it than he does. so e.g. you can use it in a way that it helps create $1000/day and he was only getting $500/day value from it, so you can offer to pay him the net present value of an annuity worth $750/day or whatever.
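The annuity arithmetic in that example can be sketched with a small calculation. The $1000/day, $500/day and $750/day figures come from the paragraph; the daily discount rate and one-year horizon are made-up assumptions for illustration:

```python
def annuity_npv(payment, rate, periods):
    # Present value of receiving `payment` at the end of each of
    # `periods` periods, discounted at `rate` per period.
    return payment * (1 - (1 + rate) ** -periods) / rate

# Hypothetical numbers: the buyer can generate $1000/day from the
# asset; the owner only gets $500/day from it. A 0.01%/day discount
# rate and a one-year horizon are illustrative assumptions.
rate, days = 0.0001, 365
buyer_value = annuity_npv(1000, rate, days)
owner_value = annuity_npv(500, rate, days)
offer = annuity_npv(750, rate, days)

# The offer exceeds what the asset is worth to the owner, and the
# buyer still pays less than the value he'll create: win/win.
assert owner_value < offer < buyer_value
```

Any offer strictly between the two valuations leaves both sides better off, which is the point of the example: the trade happens because the buyer has a better use for the property.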
curi: rights violations – violence – are seen as harm and are suppressed. but failure to help someone – lack of benefits – is a completely different category and happens all the time and is totally fine. people have no obligation to help each other, and no right to demand help from people who are not persuaded (with reason, money, or whatever as long as its voluntary) to help.
curi: violence must be suppressed. without that, society will be destroyed by any malcontent or evil bastard or whatever. but lack of benefits must not be suppressed, or else everyone would become slaves to the needy, and it would require massive violence to enforce that.
compSciSooner: I understand what you saying
curi: ok. do you think that your argument about evolution refutes a specific thing i said above so far?
compSciSooner: I think that I would modify it or change tack as I see a difference in values/principles that leads to different perspectives on how society should be structured
curi: ok. so you disagree with some significant part of the above?
compSciSooner: With the starting values. You seem to value "non violence" and minimizing violence is what we should focus on and this society is structured to minimize that
compSciSooner: I would start with something like "non suffering" and would minimize suffering. Suffering being the human experience, 'how to make peoples lives better' would be the motivating question
curi: yes. violence hurts people and also it's antithetical to reason. settling disputes by violence is not a truth-seeking method.
compSciSooner: behind society.
compSciSooner: I agree that we should minimize violence
curi: do you perhaps want to use violence (only with super majority consent, perhaps) to hurt one person to try to help two people (or 200 people, or even one person but to a large enough degree that you think it outweighs the harm of the violence)? is that the point of disagreement?
compSciSooner: If you mean physically hurt, then no. But, for a practical example, taxation is the threat of violence and use of force. But it is necessary and I dont see taxation as harm
curi: i went over taxation above.
curi: there's no principled difference between involuntary taxation and violence. if someone complies for fear of violence, or because you actually punch him, either way you are shutting his mind, reason and judgment out of the equation, and you are making an enemy of him.
compSciSooner: I dont disagree with that but I think it that when that is used to justify less government and/or taxation that it misses societal needs and practicality and how people actually act in the world.
Like I said, these ideas of non violence and government arent new to me. I used to think the model of what you have clarified above would be the model society.
curi: is your disagreement that you're in favor of some violence, not allowed in the liberal system, because you think that violence against a minority will help others sufficiently to be worth it?
compSciSooner: You state the governments purpose should be to minimize violence. I say it should be to increase the general welfare of the people, like that one document says. I think its the declaration of ind
curi: by some metric like utilitarianism or minimizing suffering.
curi: so is that a "yes" to my question?
compSciSooner: Yes, to increase the general welfare, minimize suffering
compSciSooner: That sort of thing
compSciSooner: And when we disagree what the purpose of government is, its hard to debate what the government should do and when, I am sure this is why we were having trouble earlier
curi: ok. there is a liberal idea about this which we haven't mentioned yet. it is the harmony of interests idea. liberalism says that there are no conflicts of interest between men in a liberal society. therefore, whatever is best, if it really is best, and you know it, you can persuade people to participate voluntarily and you don't need violence. because you can tell them why it's better for them! if you cannot persuade them, you should learn more about it, not escalate to violence just when your arguments fail you. you are fallible and may be mistaken. and, besides, there are plenty of other good things to do besides this particular project.
one of the main reasons people want to use violence is they think there's a conflict of interest between e.g. the very rich man and the 500 poor ppl who could benefit greatly from his wealth. they think it's for the greater good to hurt the one person to help the 500, but they do not think that's in the interests of the one person. they see a conflict with no win/win solution possible, whereas liberalism says there is always a win/win solution that could be found (and if you can't find it, then, in your ignorance, leave each other alone so that you don't risk violently imposing your errors on a person who has the truth of it).
curi: this is related also to belief in objective truth, including about morality. there is one truth of the matter about the right way to act in a situation. and these moral truths about how each person should act are compatible, rather than leading to chaos or violence (which wouldn't make much sense as the moral truth). liberalism rejects doctrines like polylogism (that there are different logics or ways of reasoning or truths for different groups, like by class, race, nation, etc)
G Neto: @compSciSooner >Yes, to increase the general welfare, minimize suffering
Why is it good to let an authority decide what will lead to a general welfare? Or decide what that means?
curi: liberals broadly see conflicts as disagreements about what the truth is, stemming from our ignorance and fallibility, and respond to this with attempts at learning and persuasion and, failing that, leaving each other alone and trying to coexist anyway despite disagreeing – which can be accomplished by not violating each other's rights – not using violence (or threat of violence). liberals find it appalling to respond to a disagreement with violence and are unimpressed by excuses for violence like "i think he's judging in bad faith". liberals see the symmetry in violence – either of us could be wrong, and neither of us knows how to address all the misconceptions or questions or doubts or whatever that the other person has.
curi: people often disagree about what the general welfare is, and also disagree that it should ever require sacrifices or win/lose options (rather than win/win options with mutual benefit). shall we have a civil war over it? or shall the majority force their ideas on the minority, with guns instead of books? is that the way to a better world?
curi: and if violence is to be permitted whenever the majority has some excuse, what will society look like? a struggle for power. coalitions seeking to be the majority and use violence to benefit themselves. conflicts everywhere. and no long term security of property for anyone.
curi: bribery and corruption too, of course. once the government has the power to help some groups and harm other groups, the social cooperation is fundamentally at risk. perhaps a broken system can survive anyway with the good will of many citizens who don't want to gain or abuse power, but it's best to put safeguards at every level possible (majority of citizens love peace and the government has carefully limited powers).
curi: one of the major examples of the harmony of men's interests is the harmony between producer and consumer. the self-interested producer will produce what consumers want, so that he can make the most profit. this serves him and, at the same time, serves the consumers. the self-interested profit motive incentivizes men to create what other men want, to serve the preferences of others.
curi: and if you use violence against people you disagree with, what are the safeguards? what if you're mistaken? how do you make it predictable way in advance so people aren't caught off guard and hurt extra? what do you do about people manipulating or trying to control the use of violence? and won't this violence suppress positive outliers, which always start as a minority and have good reasons for what they are doing which other people don't yet understand, and thus it'll violently suppress the best and brightest human beings and the progress they would have brought? and, for what? if you get e.g. 80% of people to agree on something, surely they have enough wealth between them to do it, do you really need to violently take wealth from those who disagree?
curi: liberals think their system does minimize suffering, overall (because there is no better system). but that's a consequence instead of a design principle. minimizing suffering is hard to figure out how to do (and hard to agree about what is suffering and in what amount) and doesn't lend itself well to good system design. liberalism deals with the problem of what to do when people disagree (leave each other alone – which means not using violence), whereas the various "minimize suffering" schools of thought i've seen don't have as clear or good a way to address the problem of disagreements. note that "majority rules" is not the liberal answer to disagreements in general, and liberals fear the "tyranny of the majority" and carefully limit what powers the majority vote has. the majority vote is not seen by liberalism as a guide to truth. (majority opinion is one of the common answers for how to address disagreements about what constitutes minimizing suffering). also majority vote/opinion is unpredictable in advance, so it's unsuitable for doing our best to put violence under objective, predictable limits.
curi: another part of liberalism is equality before the law. no special legal privileges by caste, race, having a grandfather who was important in winning a war, etc. everyone is equal when it comes to the government and the use of violence. laws should not target specific groups for different treatment, let alone individuals. one implication is not using the law to take from a minority group to help another group – that would not be equality before the law, whether or not some people believe it minimizes suffering. (liberals think it would be a major cause of suffering to take that kind of action because it's breaking and harming the system itself that creates peaceful cooperation and social harmony. it creates a totally different kind of society where the government is the enemy of some men, and there are conflicts between rival interest groups, and political battles get nasty because they are about who gets to use violence against whom.)
curi: Tangentially, on anarchism: many liberals, including myself, are interested in how to improve the liberal system. From the liberal perspective, it would be better if the government was funded in a fully voluntary way, without the violent collection of taxes from anyone. Further, there are some notable upsides to be gained if one didn't have a government at all. The goal is to refine liberalism, not to replace it with a different system. Without debating whether this can be achieved, I will say I think it has not been achieved, and so I'm not advocating a liberal-anarchist system today. I disagree with those who think they figured liberal anarchism out and it's ready to go.
compSciSooner: I was going to read all your posts previously
compSciSooner: Until I saw you most recent post on anarchism
compSciSooner: The idea that anarchism is something we should want to implement if we could only figure out how
compSciSooner: Is an idea so preposterous, you have lost all credibility as some one who isn't naive or crazy or at least has practical ideas
compSciSooner: For all your research you claim to have done
compSciSooner: You seem to be caught up in ideology and can't/won't accept any ideas/research/facts that contradict with your principled system
compSciSooner: And I hope you eventually become as intellectually honest with yourself as you claim to be
compSciSooner: But until then, bye
compSciSooner: It's like arguing with a communist, the ideas are so obviously preposterous, until said communist realizes how wrong they are, the discussion is pointless
compSciSooner: It is like arguing with a flat earther, I am not going to prove the earth is round, it is, I am not going to prove communism is bad, we should all agree on that, I am not going to argue that anarchism is a terrible idea, it should be obvious!
curi: you're exactly like the ppl who freak out the moment i question global warming
compSciSooner: Like I said, intellectually dishonest. Do you have an education in climate science? No? Then don't tell me what the guys who have PhD's in the subject tell the rest of us is peer-reviewed research
Then he gave permission to post the discussion and use his name (which I had offered to anonymize), then left. Plus the chatroom is labelled as public.
Italics were omitted. Not all messages above were consecutive, but many were. Here's the full log (79 page pdf). The rest is more of a chaotic mess (it was hard to get him to answer questions) instead of a clear explanation of ideas.
Discuss parenting, babies, education, child development, punishments, crying and school. Ask your questions!
People should, as a starting point and first approximation, pursue their self-interest. And society should be set up to give people the freedom to control their own lives, so that they can. People are in the best position to know what they value and how to get it. They are in the best position to help themselves. And if they pursue their self-interest fully correctly, there will be no conflicts with others who are also acting correctly.
It’s much more reasonable if, as a starting point, everyone looks after and takes care of himself. If each person instead took care of his neighbor, it’d be chaotic, uneven and (accidentally) unfair, kinda random who gets taken care of how much, and broadly less efficient. It’d be hard to plan your life and future actions because you wouldn’t have much control over what resources you’d have in the future. And there’d be constant fights, resentments and suspicions that someone put some of his effort into his own self-interest and thereby got a larger share of help for himself, and there’d be fights about the distribution of help to others – some popular people would have thousands who want to help them, while some unpopular people would have no one.
People aren’t omniscient. They make mistakes, live in a society with flawed incentives and memes, etc. So just doing what you think is in your self-interest doesn’t always work well. The first check you should do is: do you think it’s in your self interest to initiate force? If so, seriously fucking reconsider and study up about the harmony of mens’ interests, the advantages of peaceful cooperation, how the claim that capitalism exploits the workers is false, etc. But if the thing you think is in your self interest won’t hurt anyone else or break laws, then you’re only risking yourself and your property, so just put thought into it relative to the importance, irreversibility and risks.
Do not consider the good of others as your starting point. If you do that, you will end up sacrificing yourself in some ways because you aren’t omniscient and significant attention to self-interest is required to do a good job of promoting your self-interest. Instead, start with self-interest and then consider the good of others secondarily. Any time your beliefs about your self-interest appear to clash with the good or self-interest of others, that’s an indication you have a misconception about your or their interests (because if there are no misconceptions then, in a free society, there’s a harmony of men’s interests). So try to understand where the apparent conflict of interests is coming from, and consider some adjustments – maybe there is a way to act which is better for others and a way to make that work for yourself too. Or a way to do almost the same thing but with a slight adjustment to avoid it bothering others. Your number one criterion to keep in mind is: do not sacrifice yourself. If you do, you’re betraying yourself and your life, and harming your ability to help yourself or others in the future. Such sacrifices ruin lives while also broadly reducing the total overall problem solving power of the world (especially effectiveness to act in reality, due to wealth and knowledge). You need to be happy and have your own house in order, and only spare things for others when it’s cheap and easy, not when it’s hard and difficult and significantly impacts your own progress. This is not the attitude expressed by Effective Altruism and others.
Charity should be and is a small fraction of the total economy. Charity is less economically productive, so having a mostly-charity economy would mean there’s a lot less total wealth. It’s better to have a bigger pie (economy, wealth), which is growing (a large amount of effort goes to efficient, productive activities to grow the economy and create more wealth), and which has a reasonably small fraction being used in other ways. A mostly-charity oriented economy would mean a smaller pie with less growth of the pie. Charity is generally short term focused, plus anything productive can be done at a profit so charity is unnecessary (it can still be done in a charitable way, but there’s no need to; I’d advise against doing things that could be done profitably, while passing on the profit, on a big scale, as a large part of the economy).
The more money flows are tied to productivity and profit, the more signal there is about the values and preferences of consumers, and the more that signal can guide the economic planning that decides how wealth is used. Charity is inferior at economic planning because it isn’t able to use the price system, and the profit and loss system, as well as normal commerce can.
Humanity makes progress, overall, because people try to use lesser amounts of wealth (as measured in money prices, which are by far the best way to measure the value of some wealth in almost all cases) in pursuit of greater amounts of wealth, but not vice versa. And because some people save wealth – which means accumulating capital, which can be used to raise the productivity of labor (thus beginning a virtuous cycle in which the more productive labor creates wealth at a higher rate, allowing for even more saving, allowing for even more productivity increases. Note that scientific research is one of the ways that accumulated wealth gets turned into higher productivity of labor, it’s not just about building factories and tools.). Charity deviates from this system, largely in order to help with short term problems (because in the long term this system creates the best overall situation, the most wealth, the biggest pie, and so is best for everyone). It’s fine to spend a little on charity to help people who fall through the cracks (though there’s a lot of pressure to be more charitable to some of the worst people, not just to help a few good people who got unlucky, nor to help some great people who find that, as a (positive) outlier, they don’t fit into society quite right, so they have some difficulties.) But charity shouldn’t be a major priority, it’s not how things get better in the future. What makes things better in the future is, broadly, when people pursue their own self-interest efficiently (which generally includes valuing their own future, and the future of their children, and even, sure, humanity’s future – most people need not and do not narrowly value only themselves right now). If everyone keeps making their lives better, and interacts with others only for mutual benefit, then things will keep getting better. It’s dangerous when there are interactions without mutual benefit – then there’s the potential for loss, sacrifice, force, hatred, lying, war.
So, in broad strokes, I think charity should be under 10% of the overall economy. Maybe under 1%, but it depends how you count economy size. For each piece of consumer spending, there are many business-to-business transactions that go into that production. GDP or total consumer spending are poor measures of economy size.
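To illustrate why total consumer spending (or GDP) and total transaction volume give different pictures of economy size, here's a minimal sketch with made-up numbers for a single production chain. The stages and prices are hypothetical, invented purely for illustration:

```python
# Hypothetical production chain for a $5 loaf of bread (made-up numbers).
# Each stage sells its output to the next; only the final sale is
# consumer spending.
stages = [
    ("farmer sells wheat", 1.00),
    ("miller sells flour", 2.00),
    ("baker sells bread wholesale", 3.50),
    ("grocer sells bread retail", 5.00),
]

# GDP-style accounting counts only the final sale (equivalently, the sum
# of value added at each stage), to avoid double counting.
gdp_contribution = stages[-1][1]

# Total transaction volume counts every sale in the chain, including all
# the business-to-business transactions.
total_transactions = sum(price for _, price in stages)

print(f"GDP contribution: ${gdp_contribution:.2f}")      # $5.00
print(f"Total transactions: ${total_transactions:.2f}")  # $11.50
```

The same charity budget looks like a much smaller fraction of the economy when measured against total transactions than against GDP, which is why "under 10%... maybe under 1%" depends on how you count economy size.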
These thoughts are related to this discussion.
Thank you Ludwig von Mises (books), Ayn Rand (books), and David Deutsch (discussions) for helping me understand these things.
See also my first comment below.
Sam Harris wrote an article against economic freedom. Every sentence is nasty. I reply to a few:
How Rich is Too Rich?
The title is a leading question. It's asking for an answer like 3 million, 50 million, or a billion dollars. It's assuming there is an amount of wealth that's too rich, and the issue is just to decide where the line is. But that premise is incorrect. There is no "too rich". Wealth is a good thing. More wealth isn't bad.
Also, in our culture, the title will be understood to refer to individual wealth and maybe company wealth but not government wealth, university wealth or non-profit foundation wealth.
[Hearst Castle Photo, at the top]
The uncaptioned photo is misleading. The article opens by talking about wealth inequality and rich individuals. But that's a photo of a government owned tourist attraction, not a private residence. It's not a picture of wealth inequality.
I’ve written before about the crisis of inequality in the United States and about the quasi-religious abhorrence of “wealth redistribution” that causes many Americans to oppose tax increases, even on the ultra rich.
Ludwig von Mises and many other economists and political philosophers have written arguments against wealth redistribution and related concepts like socialism, statism, interventionism, initiating force, central planning, and the erosion of property rights. Rather than address these arguments, Harris just incorrectly implies they're a matter of religious faith.
The conviction that taxation is intrinsically evil has achieved a sadomasochistic fervor in conservative circles—producing the Tea Party, their Republican zombies, and increasingly terrifying failures of governance.
"intrinsically evil" is a straw man. "sadomasochistic fervor" is an insult. "Tea Party" is brought up negatively, without specifying anything negative about it. "Republican zombies" is an insult. The assertion that failures of governance are due to taxes being too low is false and unargued. The intensifier "increasingly terrifying" is aggressive, emotional rhetoric, without facts or reasoning provided.
We've now made it through the first paragraph of the article. I'll speed up for the rest.
Of course, this is just an economic cartoon.
After more insults and straw men, but no economic arguments, Harris declares that people who disagree with him are cartoon idiots. He follows up with wild uncited assertions. E.g. he thinks capitalism is at fault for the 2008 financial crisis, but he doesn't engage with the many books explaining why that's incorrect.
If you are an economist and believe that you have detected any erroneous assumptions above, please write to me here.
As I write this, the linked contact form doesn't exist. Also, this is dishonest because many economists have published detailed explanations of why the things Harris is saying are false. He's just ignoring them as if they don't exist, rather than trying to respond to any.
The federal government should levy a one-time wealth tax (perhaps 10 percent for estates above $10 million, rising to 50 percent for estates above $1 billion) and use these assets to fund an infrastructure bank.
This is a proposal for using physical force on a huge scale. Harris wants to forcibly take "a few trillion dollars" for projects he considers wise, including environmentalism. He doesn't understand liberal ideas like the advantages of dealing with people on a voluntary basis, using persuasion instead of force, or only interacting in a win/win way (when all parties think they're better off by proceeding).
Also, I don't think Harris thought through the practical details of his plan. Why does he think most or all multi-billionaires have ~50% or more of their wealth in liquid assets? And what happens if they don't? They have to take huge losses selling off non-liquid assets?
And stocks won't be liquid enough in the context of all the rich people trying to unload a bunch of stocks. The market would crash.
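To make the liquidity problem concrete, here's a rough sketch with hypothetical numbers (the fortune size and liquid fraction are my assumptions, not figures from Harris's article, which only floats rates like "perhaps 10 percent... rising to 50 percent"):

```python
# Hypothetical illustration of the liquidity problem with a one-time
# wealth tax. All numbers below are assumptions for illustration.
net_worth = 100e9        # a hypothetical $100B fortune
liquid_fraction = 0.10   # assume only 10% is cash or easily-sold assets
tax_rate = 0.50          # the top rate Harris floats for $1B+ estates

tax_owed = net_worth * tax_rate
liquid_assets = net_worth * liquid_fraction
shortfall = tax_owed - liquid_assets

print(f"Tax owed:      ${tax_owed / 1e9:.0f}B")      # $50B
print(f"Liquid assets: ${liquid_assets / 1e9:.0f}B")  # $10B
print(f"Must sell:     ${shortfall / 1e9:.0f}B of illiquid holdings")
```

Under these assumptions, the taxpayer must dump $40B of illiquid holdings, such as company stock, into a market where every other wealthy person is selling at the same time.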
Consider if e.g. Jeff Bezos had to dramatically reduce the amount of wealth he has invested in Amazon. Here are four basic possibilities:
Contrary to many readers’ assumptions, I am not recommending that the federal government confiscate productive capital from the rich to subsidize the shiftlessness of people who do not want to work.
But he is advocating that the federal government confiscate productive capital from the rich. It's just for a different intended purpose.
to the eye of this non-economist, it seems obvious
Why doesn't he try reading some economics books to find out about what he's missing? The answers seem obvious because he's arrogant, despite knowing he's ignorant of the field.
Yes, I share everyone’s fear that our government, riven by political partisanship and special interests, is often incapable of spending money wisely. But that doesn’t mean a structure couldn’t be put in place to prevent poor uses of these funds.
Harris doesn't propose any structure that would prevent poor use of the funds, nor does he acknowledge that this is a hard problem which people have been trying to solve for centuries without much success. Putting in place a structure to make government more effective is not a new idea, but Harris treats it like an answer even though he apparently hasn't thought of a structure that would work (nor can Harris point to a structure that would work that anyone else has thought of).
The article is hateful throughout, advocates massive use of force (taking trillions of dollars from its owners who give up their property rights only because they don't want to be shot, jailed, or similar) and doesn't even try to engage with the economics literature or even a fair version of what Republicans think. Harris wrote a bunch of biased insults against large groups of mainstream Americans, but didn't contribute a single topical, relevant argument to the current debate about wealth inequality.
Harris also wrote a followup article unreasonably claiming that what his critics objected to was "suggesting that taxes should be raised on billionaires". He then contradicted that by admitting, "Many readers were enraged that I could support taxation in any form." But what about how Harris insulted all Republicans as zombies? And what about the overall message, his hatred for all people who favor liberal ideas like economic freedom, peace, and property rights?
Related Post: Criticism of Sam Harris' The Moral Landscape
Commentary on The Moral Landscape: How Science Can Determine Human Values by Sam Harris.
... How could we ever say, as a matter of scientific fact, that one way of life is better, or more moral, than another? ...
I will argue, however, that questions about values—about meaning, morality, and life’s larger purpose—are really questions about the well-being of conscious creatures. Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, retributive impulses, the effects of specific laws and social institutions on human relationships, the neurophysiology of happiness and suffering, etc.
This is incorrect because well-being is itself value-dependent: what a person values affects which physical states constitute good/high well-being for that person. Studying these scientific facts – just like studying economics – helps people figure out what to value by helping inform them about the consequences of various choices and actions, but it can't directly tell them what to value or what goals to have in life. That requires moral philosophy. Omitting moral philosophy leaves no way to connect facts with values.
One plan could be to claim moral philosophy as a part of science (because the laws of physics determine the laws of computation which determine the laws of epistemology and the foundations of moral philosophy may be from epistemology). But that's not what Harris is saying. He thinks he can directly connect facts to values.
Also, even if something can be studied scientifically via a lengthy chain of relevancies, that doesn't mean that's the best way to study it. Science and reason aren't equivalents, one can do rational thinking outside of science. For moral philosophy, you'll learn more if you think about it rationally and directly than if you try to figure it out via the scientific study of physics (which would be a reductionist approach).
Cancer in the highlands of New Guinea is still cancer; cholera is still cholera; schizophrenia is still schizophrenia;
Harris doesn't understand schizophrenia. Schizophrenia is not a disease like cancer or cholera, for it's a social judgment that cannot be detected at autopsy or by other scientific methods.
Either this is intentional, off-topic activism, or Harris is so ignorant of the issue that he picked it while trying to choose an uncontroversial example. Both of those possibilities are bad.
And if there are important cultural differences in how people flourish—if, for instance, there are incompatible but equivalent ways to raise happy, intelligent, and creative children—these differences are also facts that must depend upon the organization of the human brain.
This misses the point. There are cultural differences in how people judge flourishing, in which life outcomes they value.
Also, lots of cultural differences are due to context, not value differences nor brain differences. E.g. there is more flourishing-via-camel-breeding in areas where camels live, and kids riding on camels is a larger part of good parenting in those areas.
In principle, therefore, we can account for the ways in which culture defines us within the context of neuroscience and psychology.
But the presence of camels in the area affects culture – including how many people it defines as camel breeders – yet isn't neuroscience or psychology.
While the argument I make in this book is bound to be controversial, it rests on a very simple premise: human well-being entirely depends on events in the world and on states of the human brain.
That's literally true because the states of the human brain include value judgments, too, not just the kinds of things mentioned above like being happy or having a retributive impulse. But that doesn't mean that studying brains is the best way to learn about moral philosophy, just because there's a connection doesn't mean one should take the indirect route. It's good to be aware the indirect route exists because it may be relevant to some arguments, but there's nothing wrong with the direct route (I think Harris believes there is something wrong with doing moral philosophy directly, which is why he prefers this more indirect way of trying to approach moral issues.)
Earlier I quoted Harris:
Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, retributive impulses, the effects of specific laws and social institutions on human relationships, the neurophysiology of happiness and suffering, etc.
Did he really mean something more like the following?
Values, therefore, translate into facts that can be scientifically understood because all human ideas, including about values, are information which is recorded in the brain (in the same way that computers store information on disks), so the brain is a physical medium containing information about human values – just like a book on moral philosophy is also a physical medium containing information about moral values (which therefore, being physical, can be studied by science).
I don't think he meant that. Studying the human brain because it physically contains information about values – just like a book – doesn't appear to be the project Harris has in mind. So I think we disagree. I think Harris is incorrectly trying to claim his value judgments about certain emotions and psychological states as a part of science, not saying that value judgments are recorded as physical information in brains (which is also true of books, which I think he views differently than brains).
A more detailed understanding of these truths will force us to draw clear distinctions between different ways of living in society with one another, judging some to be better or worse, more or less true to the facts, and more or less ethical.
I think it's disgusting and revealing that Harris wants to use the authority of science to "force" people to think in certain ways, rather than to persuade them to.
There are, for instance, twenty-one U.S. states that still allow corporal punishment in their schools. These are places where it is actually legal for a teacher to beat a child with a wooden board hard enough to raise large bruises and even to break the skin.
This is factually false (in 2010 when the book came out). The part I object to is about raising large bruises and even breaking the skin. Some of the 21 states referred to do not legally allow that. Here's an example of contradicting information, from Time:
In Texas, corporal punishment becomes child abuse when it “results in substantial harm to a child.” As a practical matter in Texas, that means a physical injury that leaves a mark, like bleeding and bruising...
... In Maine, for instance, corporal punishment is lawful if it results “in no more than transient discomfort or minor temporary marks.” Georgia simply forbids any “physical injury,” but here again, what that means is largely at the discretion of judges and prosecutors.
What is Harris doing by including factually false information in his book? What's going on? One of his themes is scientific rigor (which he's bad at in his own scientific papers), but he's not being rigorous in his claims. Either research corporal punishment adequately or don't write about it.
In fact, all the research indicates that corporal punishment is a disastrous practice, leading to more violence and social pathology—and, perversely, to greater support for corporal punishment.
As much as I despise corporal punishment, I don't trust Harris' claim about the state of scientific research. So I checked the cite and it's just one long paper which criticizes corporal punishment in US schools. That can't be adequate for Harris' claim about "all" the research because it's not a survey of every piece of research in the field. It's not a survey at all, and doesn't tell us what even 20% of the research says, so reading Harris' "all" as an exaggeration of "most of" won't fix this error. Harris is trying to deny that people disagree with him, which is false and nasty (you should refute opponents, not deny their arguments exist). He does this by citing a paper that argues for his position but doesn't actually try to survey what everyone else is saying.
Further, the research doesn't indicate it's a "disastrous" practice because what is a disaster is a value judgment, which is outside the scope of any current empirical research (and this isn't even brain research, which is the type of research Harris thinks can tell us about morality). You can research how wounded students are in practice, or the severity of wounds permitted by law, but that kind of research can't tell you what wounds or lack of wounds would be a disaster or otherwise deviate from the moral or good life.
Papers like this often include value judgments which aren't labelled appropriately. It's common to either include philosophy arguments in papers as if they were part of science, or to sneak in philosophy conclusions without arguing them. E.g. this paper says "Fortunately, the practice of government-executed corporal punishment has been declared unconstitutional." But what is fortunate is a value judgment which the research doesn't determine (the research is relevant information to help us make this value judgment, but that's different than the research itself being able to conclude that this is fortunate in the way a physics paper can reach a conclusion about gravity.)
Similarly, the paper says, "A wealth of scientific research demonstrates that corporal punishment of children damages them cognitively, motivationally, physically, psychologically, and emotionally." No it doesn't because "damages" is a value judgment – parents differ regarding what kind of child they want to have. I think there's a truth of the matter and some parents are mistaken, but my knowledge of that comes from rational argument, not from scientific research. Regardless of what future brain research may reveal, today's corporal punishment research is not capable of telling us what science says we should value, it only aids us in choosing our values by helping us better understand the consequences of actions.
The "research" paper concludes with blatant political activism, not science:
The responsibility to create a kinder, gentler society resides with many people, including parents. But the government is uniquely positioned and particularly responsible for synthesizing scientific and other data to produce sound public policy. When state governments fail to recognize the unreasonableness of their own policies, it is incumbent upon the federal courts to uphold the Constitution in challenges to the government action. But the federal judiciary has been asleep at the wheel for more than thirty years when it comes to protecting children from beatings by state actors. The ultimate responsibility to safeguard citizens from liberty deprivations lies with the Supreme Court, but it, too, has chosen to ignore the plight of schoolchildren. The judiciary should act on this issue immediately and declare school corporal punishment unconstitutional. Until then, relatively innocent, quintessentially powerless, and strikingly black Americans will continue to pay the immediate price with incalculable ultimate social costs.
Agree or disagree, that's not empirical science. My view: I broadly agree that violence against children is bad, and I've proposed a guideline for parents: never do anything to your child that would be a crime to do to your neighbor. But I disagree with the author's perspective on government, which I want to be more limited. I think the government should stay out of science, parenting and education. (I have logical arguments regarding these beliefs, which we could discuss in the comments below, but I don't claim they are the outcome of scientific research.)
I think the example about corporal punishment is representative of how Harris (and many other authors) incorrectly use research, facts and cites.
And so it is obvious that before we can make any progress toward a science of morality, we will have to clear some philosophical brush. In this chapter, I attempt to do this within the limits of what I imagine to be most readers’ tolerance for such projects. Those who leave this section with their doubts intact are encouraged to consult the endnotes.
Harris is hostile to philosophy. That's notable because the book consists almost entirely of philosophy (or at least non-science, like politics, which is a sub-field of philosophy that we often don't call philosophy, and which requires philosophical methods to think about well). This is typical: people study science and then do philosophy, but don't do it very well because they haven't studied philosophy adequately (often because they dislike philosophy and don't think it's valuable, which is often because most philosophy is bad – but people's philosophical intuitions, learned in childhood, aren't very good either, and it's necessary to find or create good ideas about how to reason).
But this notion of “ought” is an artificial and needlessly confusing way to think about moral choice. In fact, it seems to be another dismal product of Abrahamic religion—which, strangely enough, now constrains the thinking of even atheists. If this notion of “ought” means anything we can possibly care about, it must translate into a concern about the actual or potential experience of conscious beings (either in this life or in some other). For instance, to say that we ought to treat children with kindness seems identical to saying that everyone will tend to be better off if we do.
But what constitutes being "better off" depends on what you want, so this does nothing to address the is/ought problem – it just moves the problem from "ought" to "better".
The person who claims that he does not want to be better off is either wrong about what he does, in fact, want (i.e., he doesn’t know what he’s missing), or he is lying, or he is not making sense.
Right, because "better off" means "better off according to your own values", so it's best for you no matter what you value. But this doesn't address the is/ought problem or the problem of determining what to value.
Imagine if there were only two people living on earth: we can call them Adam and Eve. Clearly, we can ask how these two people might maximize their well-being. Are there wrong answers to this question? Of course. (Wrong answer number 1: smash each other in the face with a large rock.)
Harris is appealing to widespread moral intuitions and common values in our culture, not actually scientifically establishing anything about morality. He just thinks it's obvious (which is what the phrase "of course" means), but appeal to obviousness isn't a method of science (it's a mistaken method of philosophy).
while there are ways for their personal interests to be in conflict, most solutions to the problem of how two people can thrive on earth will not be zero-sum. Surely the best solutions will not be zero-sum.
I believe this (the non-existence of conflicts of interest) because of non-scientific arguments put forward by liberal political philosophers like Ayn Rand and Ludwig von Mises. But Harris is just saying things like "surely" instead of relating it to scientific facts, so it's a bad argument.
While this leaves the question of what constitutes well-being genuinely open, there is every reason to think that this question has a finite range of answers.
No, logically there is an infinite range of answers to that question. I have no idea how Harris decided it's a finite range. For example, one could value there being exactly 3 paperclips in the whole universe. Or 4. Or 5. So you can see that, as a logical matter, there are infinitely many potential answers. Most of the logically possible answers are dumb, but dumbness is a matter for fallible, rational, critical discussion.
Let me simply concede that if you don’t see a distinction between these two lives [descriptions of lives that almost everyone in our culture, including Harris, considers especially good and bad] that is worth valuing (premise 1 above), there may be nothing I can say that will attract you to my view of the moral landscape.
Basically, Harris is admitting he lacks arguments about his main thesis. If you don't already agree with him about some of the main issues, he doesn't know what to do. He doesn't have a logical way to connect values to science, he needs you to share existing intuitions about morality with him.
Personally, I agree with him about the distinction (I disagree with the altruistic attitude, but it's still way better than rape, violence, and being hunted through a jungle by would-be murderers). I do believe that my view is rationally defensible, but I do not believe that my view of this matter is a part of science.
Harris, by contrast, seems to think his view is not rationally defensible in full, because he thinks there may be "nothing" he could say to persuade someone who doesn't already agree with parts of it.
It can be useful to say, "Here are arguments for conclusion C that use P as a premise, so if you already agree with me about P then I think you should agree with me about C too." But the book doesn't present itself as merely doing that – as building some additional moral ideas on top of common, existing moral ideas. Harris claims to be able to put morality on a scientific footing and otherwise deal with fundamental and foundational issues. But his book openly concedes it can't do that.
Science simply represents our best effort to understand what is going on in this universe, and the boundary between it and the rest of rational thought cannot always be drawn. There are many tools one must get in hand to think scientifically—ideas about cause and effect, respect for evidence and logical coherence, a dash of curiosity and intellectual honesty, the inclination to make falsifiable predictions, etc.—and these must be put to use long before one starts worrying about mathematical models or specific data.
The book seems to argue that there is a connection between empirical science – like brain scans – and values. But then Harris says actually he doesn't have any clear definition of science. If one is willing to include "respect for evidence" within the domain of science, then of course science can tell you about values – it can tell you to respect evidence (respect is a value judgment). Similarly, honesty and curiosity are moral issues. But for some reason Harris doesn't conclude, "Morality precedes science and moral values are needed before you can do science successfully, so trying to scientifically establish moral values is pointless." (To give credit: the need for moral values before you can do science was told to me by David Deutsch, years before The Moral Landscape was written.)
Broadly, if you think rational philosophy is a part of science, because you think science refers to all our best efforts at rational understanding, then of course moral philosophy (being a field of philosophy) is part of science. But that's bad terminology (our culture usefully distinguishes physicists from reason-oriented philosophers), and it's not actually Harris' point.
The book is sloppy, and the thesis is misconceived because of Harris' mistaken attitudes towards science. It's unnecessary to claim everything as part of science. Reason isn't limited to empirical matters. He should study epistemology and understand reason correctly, rather than trying to use science as his only rational tool.
Everything about human beings physically exists, so technically physics research (including its sub-fields) can investigate any aspect of human beings. Further, human brains are computers which operate according to the laws of physics (which determine the laws of computation), and so physics is relevant. But that isn't Harris' thesis. And even granting all this, science wouldn't simply determine values on its own, and supersede philosophy, because we need epistemology in order to judge which science and applications of science are correct. (What I think is that science is relevant in many ways to thinking about morality – it's useful – but not that science can determine morality.)
Harris doesn't know how to scientifically determine which physical states of human beings to value and consider to constitute "well-being". He thinks that brain scans will help with this, but such scans can never tell us that the brain scan results we label "happiness" are scientifically good things (the "happiness" label is not science, it's not an observed fact, it's philosophy and value judgment which is open to rational discussion).
And science isn't the best way to learn about people and their actions or values, even if it would work. For more explanation, see the criticism of reductionism in chapter 1 of The Fabric of Reality by David Deutsch, or ask a question in the comments below. And as Popper explained, we can start anywhere – conjecture anything in any field – and approach it critically. We don't have to focus on building up to the ideas we're interested in starting from foundations that are difficult to argue within our current culture. We can just learn about morality directly with guesses and criticism – but Harris doesn't know that, so his book isn't very good. For example Harris writes:
Many of these people also claim that a scientific foundation for morality would serve no purpose in any case. They think we can combat human evil all the while knowing that our notions of “good” and “evil” are completely unwarranted.
Harris thinks that if you don't have scientific arguments, your conclusions are "unwarranted". This is a major error which is corrected by Critical Rationalism. Harris' problem is he has no idea how to defend reason itself without using science, not just when it comes to moral values but also for anything else (e.g. politics, economics, logic, math or epistemology). There are many valuable areas of human knowledge which are predominantly not understood in a scientific way, but which are rational nonetheless. Reason is actually about error correction, not about empiricism nor about using justifying authorities like science. Authority is actually the arch-enemy of reason, so Harris' book is actually, by accident, an extended attack on reason, because the essence of his project is about justifying moral claims (all justifications are appeals to authority, sometimes in disguise) rather than about thinking critically to try to correct errors and thereby improve our ideas. (In fairness, he's not alone here, and he's not unusually bad. These kinds of mistakes are common when one doesn't understand Critical Rationalism adequately, and we live in a world where fewer than 100 living people have that knowledge. What I dislike is the lack of Paths Forward – ways for Harris' mistakes to be corrected.)
PS I have not read the whole book. If I missed a part which addresses one of my criticisms, please let me know in the comments below and provide a quote.
Related Post: Sam Harris vs. Capitalism
This post criticizes The Neural Correlates of Religious and Nonreligious Belief (2009) by Sam Harris, Jonas T. Kaplan, Ashley Curiel, Susan Y. Bookheimer, Marco Iacoboni, and Mark S. Cohen. I wrote this as a reply on the Change My View subreddit, and made minor edits so it'd stand alone.
Once we had two groups of subjects (Christians and Nonbelievers)
Specific criteria used are not given, making this research non-reproducible. This especially concerns me because such criteria are controversial and I would expect to disagree with the study authors about some categorizations regarding which persons think about which topics in religious ways (I don't think that religious thinking is all or nothing).
Later in the paper, they admit the screening criteria were poor and make excuses, acknowledging "the failure of our brief screening procedure to accurately assess a person's religious beliefs".
To this end we assessed subjects' general intelligence using the Weschler Abbreviated Scale of Intelligence (WASI)
It's spelled "Wechsler".
IQ tests have many problems. Here is a previous discussion where I pointed out some of the problems. http://curi.us/2056-iq
screened for psychopathology using the Brief Psychiatric Rating Scale (BPRS)
Their non-random screening, including this, dropped 44% of people. That's far from a representative sample of the population! And that's only counting people who had already met the first 5 criteria, two of which related to psychiatric issues.
There are lots of problems with psychiatric screenings. I'm not going to go into it in detail here, but see these books criticizing psychiatry: http://fallibleideas.com/books#szasz
Forty of these participated in the fMRI portion of our study, but ten were later dropped, and their data excluded from subsequent analysis, due to technical difficulties with their scans (2 subjects), or to achieve a gender balance between the two groups (1 subject), or because their responses to our experimental stimuli indicated that they did not actually meet the criteria for inclusion in our study as either nonbelievers or committed Christians (7 subjects).
Dropping those 7 people is a big problem. They were removed because their data didn't fit the expected answer patterns. IMO that should have been a learning opportunity to reconsider mistaken expectations.
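To see how the exclusions compound, here's a quick sketch using the figures above (44% dropped at screening, then 10 of 40 fMRI participants excluded). Treating the stages as independent filters is my simplifying assumption:

```python
# Compounding attrition, using figures from this post (treated, as an
# assumption, as independent filter stages): 44% dropped at screening,
# then 10 of 40 fMRI participants excluded.
kept_after_screening = 1 - 0.44   # 56% survive the phone screening
kept_after_fmri = (40 - 10) / 40  # 30 of 40 survive the fMRI stage
overall_kept = kept_after_screening * kept_after_fmri
print(f"{overall_kept:.0%}")  # 42%
```

So well under half of the already-filtered applicant pool makes it into the analyzed data, before even considering the first 5 screening criteria.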
Here are example stimuli from the experiment. I didn't read them all, but it looked to me like over half the groups of 4 stimuli had a flaw. Also, there's a systematic bias: the Christian truths are more often non-hedged statements, while the atheist truths are more often hedged. E.g. in 19 you read "The Bible" in the Christian one and "Most of the Bible" in the atheist one, and in 29 Jesus either "literally" rose from the dead or "probably" did not rise from the dead.
The Bible is free from error.
This is categorized as something that all the Christian participants should consider true. But many serious Christians do not believe this.
The Bible is free from significant error.
It's weird that they have two very similar questions.
All books provided perfectly accurate accounts of history.
The Bible is full of fictional stories and contains historical errors.
This is categorized as something that all the Christian participants should consider false. But many serious Christians do believe this.
People who believe in the biblical God often do so on very good evidence.
This is categorized as something that all the Christian participants should consider true. But many serious Christians do not believe this.
It reasonable to believe in an omniscient God.
Jesus Christ can’t do anything to help humanity in the 21st century.
This is supposed to be considered true by non-believers, but many non-believers (including me) consider this statement false. (Though it's vague: do they mean Jesus Christ literally and personally can help people today, or his teachings can help? I'd change my answer depending on that. It's his teachings that I think can do "anything" (more than zero) to help.)
In general, they shouldn't have used words like "anything", "all", "most", "greatest" because people routinely misread those statements (misreading e.g. "all" as "most", or vice versa). And those kinds of statements are so often written incorrectly and carelessly that readers, reasonably, don't expect reliable, literal precision from them.
Jesus was literally born of a virgin.
Lots of Christians don't believe this – possibly because they are more educated about their religion (not less). "Virgin" (in the sense of not having sex) is a mistranslation – he was born of a young woman (which, btw, is a typical meaning of "virgin" in English).
The Biblical story of creation is basically true.
Tons of Christians aren't young Earth creationists.
Most of the Bible is inferior to modern thinking on morality and human happiness.
This is supposed to be considered true by atheists, but as an atheist I consider it vague (which modern thinking?). If I try to read it using guesses about what the author of the statement meant, I think I disagree with it. Also if I read it with a "most" before "modern thinking", then I'd judge it false.
The Biblical story of creation is purely a myth.
This is supposed to be an atheist truth, but as an atheist I consider it false (due to "purely", which I mentioned above is the kind of word they shouldn't have used because people vary in how literally they read it). It's also problematic because I think many atheists aren't adequately familiar with the Biblical story of creation.
The Christian doctrine of the Trinity is almost surely fictional.
Many atheists couldn't say what the Trinity doctrine is. And many atheists, including me, would disagree with this due to the "almost surely fictional". I consider it fictional and would not want to hedge in that way. If the words "almost surely" were deleted then I'd agree with the statement, but I'm not comfortable with this statement as written. There were lots of statements that were supposed to be things I would agree with, but which included hedges I don't believe.
Human beings have complete control over the environment and can grow food anywhere.
This is vague. They consider it false. But we can grow food in airplanes, submarines or spaceships. Where, exactly, can't we grow food? In the middle of active volcanos? In the middle of the sun? Did they expect me to worry about suns or black holes because of taking "anywhere" literally?
The greatest human accomplishments have had nothing to do with God.
This is one of the worst ones. This is meant to be considered true by atheists. But, historically, most human accomplishments (great and small) were accomplished by religious people who often did think God was relevant (or the gods in the case of polytheists like many Greeks). Example: https://en.wikipedia.org/wiki/Religious_views_of_Isaac_Newton
It is wise to create a government that can help protect its citizens from harm.
I'm confused about why this is meant to be false for everyone. Most people agree with this, right?
Also, for 54 and 55 they accidentally swapped the Christian and Atheist truths. Since they have things categorized incorrectly and make grammar and spelling errors in what they published, I'm concerned that these 4 statements were miscategorized in the actual study.
I could go on and on. There are tons of stimuli with these kinds of problems. This is not up to the high standards required for scientific progress. And they actually excluded 7 people for failing to answer at least 90% of the questions in the way the study authors expected them to answer based on the poor phone screening. And, overall, it looked to me like a lot of highly religious Christians would agree with well under 90% of the Christian truth stimuli, so I think the experimental design is bad. The researchers seem to think that e.g. if you believe in evolution you aren't a serious, religious Christian, which is incorrect. Note they failed at their own design goal that:
All statements were designed to be judged easily as “true” or “false”
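As a sketch of how that kind of exclusion rule works – the 90% threshold matches my reading above, but the mechanics here are my assumption, not code from the study:

```python
# Hypothetical sketch of a response-based exclusion rule: drop any
# subject who matches the "expected" answer pattern on fewer than
# 90% of stimuli. The threshold and mechanics are assumptions.
def agreement_rate(responses, expected):
    """Fraction of stimuli where the subject gave the expected answer."""
    matches = sum(r == e for r, e in zip(responses, expected))
    return matches / len(expected)

expected = [True] * 10              # e.g. 10 "Christian truth" stimuli
subject = [True] * 8 + [False] * 2  # agrees with 8 of 10
rate = agreement_rate(subject, expected)
print(rate, rate >= 0.9)  # 0.8 False -> this subject would be excluded
```

The trouble is that with flawed stimuli, a sincere believer disagreeing with a badly worded "truth" gets scored the same as someone who doesn't hold the belief at all.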
Anyway, I'm not even trying to be comprehensive with the issues. There are just a lot of issues. And the paper's cites lead to a ton more issues, e.g. I could go through "The role of the extrapersonal brain systems in religious activity" and point out flaws with that (it's the cite on some text I particularly disagreed with). For now, I'll continue with some comments on the brain scanning aspect since I didn't get to that yet.
For both groups, and in both categories of stimuli, belief (judgments of “true” vs judgments of “false”) was associated with greater signal in the ventromedial prefrontal cortex
This is like measuring magnitudes of electric signals in different regions of CPUs while running different software. That would be a bad way to understand CPUs or software.
Actually, overall, the brain scanning stuff is hard to criticize due to the lack of substantial claims. They'd need to conclude something significant for me to point out how the evidence is inadequate for the conclusion. But they didn't. Big picture, the paper says more like "We did something and here's the data we got", which is true as far as it goes. They were looking for correlations and found a couple. Finding correlations is quite different from understanding and making claims about how people think. The world is packed full of non-causal correlations. Due to the lack of major claims about the brain scan correlations meaning anything, I'm done. It has quality issues and doesn't reach important conclusions.
The FI discussion group is moving to Google Groups due to unreliable email delivery with Yahoo Groups. The Yahoo Group will remain as a backup group (like the Google Group used to be).