Do You Actually Want to Make Progress?

I’ve written a lot to help people make progress. The basic premise is a reader who wants to make progress but runs into some problems. He fails to solve some of them. He gets stuck. He could, therefore, be helped with advice, with new approaches to try, with new skills that improve his problem solving abilities, and with other knowledge.

But even the self-selected few who read my work generally do not seem to want to make progress. I think that is what stops them. They aren’t blocked by obstacles. They are blocked by not trying, not caring, not doing much. (It’s not that they are opposed to making progress, either. They merely don’t actively, positively want to. They’re approximately neutral on the matter.)

You could say that not valuing progress much is a problem. But it’s a different sort of problem than, say, being unable to calculate a mathematical expression, being ignorant of what sentences mean, making logical errors, or being biased.

People have those other sorts of problems. They may be bad at grammar and logic, and that may prevent them from e.g. productively debating philosophy or even from productively reading Popper. But I don’t think they have those problems so badly that they’re stuck with no way to make progress. They could work on things step by step. Often they don’t want to work on more basic skills; they want to work on advanced stuff but skipping steps doesn’t work so they get stuck.

Apparently, a lot of people value engaging with clever, impressive stuff. They want to work on complex philosophy not arithmetic. They want to talk about science not what paragraphs mean. In that case, they don’t really value progress itself in a way that would motivate them to work on prerequisites and develop their skills. They value some of the results they think progress would give them. Or maybe they think skill building would take too long, so they give up and look for shortcuts (which then don’t work).

A theme in Atlas Shrugged by Ayn Rand is that most people don’t value their lives, don’t really want to live, and don’t really try to make progress. (Nor do most people particularly want to die. They are bad at wanting things, valuing things or having goals.)

People who say they are ambitious, and claim to care deeply about making progress, are usually lying to themselves. If we talk, that results in them repeating the lies to me. Lying to yourself leads to lying to others. So I cannot trust people who say they want to make progress.

So I’m left considering options like:

  • Only talk to people who already have done a lot, e.g. written 100,000 words of philosophy that didn’t suck (prior to that, they’re welcome to read my stuff, and perhaps even have very brief exchanges with me on my forum, but no serious or lengthy conversations with me).
  • Write for people who want to make progress, should any exist, and not worry about the rest.
  • Have more ways to test people, more ways they can differentiate themselves.
  • Let them figure out for themselves some ways to differentiate themselves, if they can.
  • Try to figure out why people are so passive and lacking in values, and try to solve that problem which is very foreign to me, even though they don’t want me to solve that problem and will not help me (they aren’t opposed to me solving it either; they don’t know how to particularly care).

I can’t trust high prestige people to be better. Social climbing isn’t a good sign. Someone being respected or famous or rich doesn’t mean they’re very rational, or that they want to make progress, or that they’ll have much interest in learning and self-improvement. Conventional track records mean little.

The people who are so neutral, passive, boring and gray usually do care a lot about, and react strongly to, some things. Some have a political tribe or care about celebrity gossip. Broadly, they’re second-handers who react to the opinions of others. Instead of having and living by their own values, they try to use the values of people around them as substitutes. The people around them are the same way, so you can get a chain of people trying to use the second-hand values of other people who are doing the same thing with other people still who are doing the same thing, and so on. Like in The Fountainhead by Ayn Rand:

A world where the thought of each man will not be his own, but an attempt to guess the thought in the brain of his neighbor who’ll have no thought of his own but an attempt to guess the thought of the next neighbor who’ll have no thought—and so on, Peter, around the globe.

Anyway, I think I’ve assumed too much that people want to make progress instead of addressing the hard problem that they don’t really. And it’s hard to speak to someone who doesn’t really want to make progress, because what do I have to offer them? I can offer things like more effective ways to achieve goals, but what good is that to someone who doesn’t really care about their goals or their life? People generally don’t take their own goals very seriously.


Elliot Temple | Permalink | Messages (0)

Breaking One’s Word

Most people seem to think it isn’t a big deal to break their word. It’s a “small” error to say they will do something and then not do it. Or maybe not even an error: they seem to do that on purpose as a conflict avoidance strategy. That’s often sabotaging: if they hadn’t said they would do it, other arrangements would have been made to get it done.

This has come up with rational debate policies. Due to my ideas, several people besides me have posted policies in writing. But none of them could be relied on to keep his word, and some have broken their word. A policy offering guarantees/promises doesn’t mean much unless the author is trustworthy, and most people aren’t trustworthy.

Trust also comes up when writing policy conditions. Suppose my policy says “If you agree to do X, then I will do Y.” Unfortunately, many people will simply agree to X, get Y, then break their word. Policies based on other people agreeing to stuff only work well if they aren’t liars.

Making the other person go first can help deal with untrusted people but often doesn’t work. E.g. I might want them to agree to a condition for how a debate may end, in which case I’ll have to talk with them for a while before we get to the end where they might break their word. Or I might want them to agree to a condition for how to behave during a debate or what procedures to follow, and they might do what they said for a while, then break their word midway through the debate.

Potential solutions include only making agreements with people with substantial reputations or requiring people to put thousands of dollars in escrow, with a neutral arbiter who will give it to me if the person didn’t follow the rules they agreed to. These approaches are problematic and would prevent most discussions from happening. But there are no serious consequences when people with no reputation, and no money at stake, break their word. And I generally only want discussions that meet some criteria that are mutually agreed upon in advance, not just any discussion with no standards whatsoever.

It’s questionable how much breaking one’s word affects people with big, positive reputations. Often I think it wouldn’t matter much anyway. Their fans wouldn’t care or might not even find out that it happened. Most people with a lot of fans wouldn’t give me any way to tell all their fans that they broke their word. They don’t have anything like a forum where I can write something that most of their fans would see. Instead, they can communicate with lots of fans with e.g. a newsletter, tweets, or new facebook posts – but if they break their word to me, they will never give me access to those things to tell their fans what they did.

My main goal with rationality policies is to explain what is rational. I’m trying to understand what people could do that would work if they did it. Creating practical solutions, that will work today, is secondary. I try to offer realistic options, and not everyone is dishonest, but many people are and I don’t have a great solution for this situation. People could study integrity as a prerequisite for having a debate policy, but I don’t expect people lacking integrity to actually do that.

One of my main motives for placing conditions on discussions is that people abruptly leave in the middle. I don’t want half-discussions. Why? Because I’ve already had the first half of too many discussions too many times. It’s the second half that contains more new information. The first half often goes over standard, well-known issues in order to get to the point of saying new things.

If people would make it a very high priority to keep their word, they would be better people, have better lives, and treat others better. But they don’t want to.

People find it socially convenient to lie. It helps them avoid conflict by lying to hide disagreements. It helps them avoid being judged negatively by lying to avoid admitting to believing anything that the person they’re speaking with considers wrong or stupid. It helps them pretend to be agreeable by agreeing to things they won’t do. Lying helps them flatter others to try to manipulate them. Lying helps them prop up their self-esteem and reputation by pretending to be something they aren’t. Lying helps them avoid the effort of thinking about what they will and won’t do, or do or don’t believe, or under what conditions their plans might fail.

People often plan to do something, say they will do it, but don’t make reasonable arrangements to make sure it actually happens. They don’t bother to set an alarm, then forget. They have no reliable project management approach – such as a todo list that they habitually check several times per day – and then say they will do something even though they aren’t in a position to reliably do anything that isn’t a habit. They don’t want to face the reality of how unreliable they are. The core problem is often that they don’t know how to follow through on their plans, and they then tack on lying to people to avoid facing reality. They tell themselves it wasn’t lying because they said they would do something and genuinely intended to do it at the time they said that. But it is lying to say you’ll do something if you aren’t going to make appropriate plans and arrangements so that it actually, reliably gets done.

Regarding debates or other interactions, I could ask people what they’ve done to improve their integrity (and rationality and skill) to be way better than culturally normal. If they haven’t put in the work, then they shouldn’t expect to be significantly better than convention at integrity, even in their own opinion. If they claim to be better anyway, they are demonstrating a lack of integrity. And if they don’t claim to be better, then they should agree with me that they aren’t in a position to agree to the debate rules – they don’t know if they will actually keep their word about that.

There are various other ways I could ask hard questions and filter people out, but I want to allow people to actually have some discussions with me. I don’t want to reject everyone as too flawed even if I’m correct about the flaws.

There are problems here that could use better solutions.


See also my article on Lying.


Elliot Temple | Permalink | Messages (0)

“Small” Fraud by Tyson and Food Safety Net Services

This is a followup for my article “Small” Errors, Frauds and Violences. It discusses a specific example of “small” fraud.


Tyson is a large meat processing company that gets meat from factory farms. Tyson’s website advertises that their meat passes objective inspections and audits (mirror) from unbiased third parties.

Tyson makes these claims because these issues matter to consumers and affect purchasing. For example, a 2015 survey found that “56 percent of US consumers stop buying from companies they believe are unethical” and 35% would stop buying even if there is no substitute available. So if Tyson is lying to seem more ethical, there is actual harm to consumers who bought products they wouldn’t have bought without being lied to, so it’d qualify legally as fraud.

So if Tyson says (mirror) “The [third party] audits give us rigorous feedback to help fine tune our food safety practices.”, that better be true. They better actually have internal documents containing text which a reasonable person could interpret as “rigorous feedback”. And if Tyson puts up a website section about animal welfare on their whole website about sustainability, their claims better be true.

I don’t think this stuff is false in a “big” way. E.g., they say they audited 50 facilities in 2021 just for their “Social Compliance Auditing program”. Did they actually audit 0 facilities? Are they just lying and making stuff up? I really doubt it.

But is it “small” fraud? Is it actually true that the audits give them rigorous feedback? Are consumers being misled?

I am suspicious because they get third party audits from Food Safety Net Services, an allegedly independent company that posts partisan meat propaganda (mirror) on their own public website.

How rigorous or independent are the audits from a company that markets (mirror) “Establishing Credibility” as a service they provide while talking about how you need a “non-biased, third-party testing facility” (themselves) and saying they’ll help you gain the “trust” of consumers? They obviously aren’t actually non-biased since they somehow think posting partisan meat propaganda on their website is fine while trying to claim non-bias.

Food Safety Net Services don’t even have a Wikipedia page or other basic information about them available, but they do say (mirror) that their auditing:

started as a subset of FSNS Laboratories in 1998. The primary focus of the auditing group was product and customer-specific audits for laboratory customers. With a large customer base in the meat industry, our auditing business started by offering services specific to meat production and processing. … While still heavily involved in the meat industry, our focus in 2008 broadened to include all food manufacturing sites.

The auditing started with a pre-existing customer base in the meat industry, and a decade later expanded to cover other types of food. It sounds independent like how Uber drivers are independent contractors or how many Amazon delivery drivers work for independent companies. This is the meat industry auditing itself, displaying their partisan biases in public, and then claiming they have non-biased, independent auditing. How can you do a non-biased audit when you have no other income and must please your meat customers? How can you do a non-biased meat audit when you literally post meat-related propaganda articles on your website?

How can you do independent, non-biased audits when your meat auditing team is run by meat industry veterans? Isn’t it suspicious that your “Senior Vice President of Audit Services” “spent 20 years in meat processing facilities, a majority of the time in operational management. Operational experience included steak cutting, marinating, fully cooked meat products, par fry meat and vegetables, batter and breaded meat and vegetables, beef slaughter and fabrication, ground beef, and beef trimmings.” (source). Why exactly is she qualified to be in charge of non-biased audits? Did she undergo anti-bias training? What has she done to become unbiased about meat after her time in the industry? None of her listed credentials actually say anything about her ability to be unbiased about meat auditing. Instead of trying to establish her objectivity in any way, they brag about someone with “a strong background in the meat industry” performing over 300 audits.

Their Impartiality Statement is one paragraph long and says “Team members … have agreed to operate in an ethical manner with no conflict or perceived conflict of interest.” and employees have to sign an ethics document promising to disclose conflicts of interest. That’s it. Their strategy for providing non-biased audits is to make low-level employees promise to be non-biased in writing, that way if anything goes wrong management can put all the blame on the workers and claim the workers defrauded them by falsely signing the contracts they were required to sign to be hired.

Is this a ridiculous joke, lawbreaking, or a “small” fraud that doesn’t really matter, or a “small” fraud that actually does matter? Would ending practices like this make the industry better and lead to more sanitary conditions for farm animals, or would it be irrelevant?

I think ending fraud would indirectly result in better conditions for animals and reduce their suffering (on the premise that animals can suffer). Companies would have to make changes, like using more effective audits, so that their policies are followed more. And they’d have to change their practices to better match what the public thinks is OK.

This stuff isn’t very hard to find, but in a world where even some anti-factory-farm activists don’t care (and actually express high confidence about the legal innocence of the factory farm companies), it’s hard to fix.

Though some activists actually have done some better and more useful work. For example, The Humane League has a 2021 report about slaughterhouses not following the law. Despite bias, current auditing practices already show many violations. That’s not primarily about fraud, but it implies fraud because the companies tell the public that their meat was produced in compliance with the law.


Elliot Temple | Permalink | Messages (0)

“Small” Errors, Frauds and Violences

People often don’t want to fix “small” problems. They commonly don’t believe that consistently getting “small” stuff right would lead to better outcomes for big problems.

“Small” Intellectual Errors

For example, people generally don’t think avoiding misquotes, incorrect cites, factual errors, math errors, grammar errors, ambiguous sentences and vague references would dramatically improve discussions.

They’ve never tried it, don’t have the skill to try it even if they wanted to, and have no experience with what it’d be like.

But they believe it’d be too much work because they aren’t imagining practicing these skills until they’re largely automated. If you do all the work with conscious effort, that would indeed be too much work, as it would be for most things.

You automatically use many words for their correct meanings, like “cat” or “table”. What you automatically and reliably get right, with ease, can be expanded with study and practice. What you find intuitive or second-nature can be expanded. Reliably getting it right without significant conscious effort is called mastery.

But you can basically only expand your mastery to “small” issues. You can’t just take some big, hard, complex thing with 50 parts and master it as a single, whole unit. You have to break it into those 50 parts and master them individually. You can only realistically work on a few at a time.

So if you don’t want to work on small things, you’ll be stuck. And most people are pretty stuck on most topics, so that makes sense. The theory fits with my observations.

Also, in general, you can’t know how “small” an error is until after you fix it. Sometimes what appears to be a “small” error turns out very important and requires large changes to fix. And sometimes what appears to be a “big” error can be fixed with one small change. After you understand the error and its solution, you can judge its size. But when there are still significant unknowns, you’re just guessing. So if you refuse to try to fix “small” errors, you will inevitably guess that some “big” errors are small and then refuse to try to fix them.

Factory Farm Fraud

Similarly, animal welfare activists generally don’t believe that policing fraud is a good approach to factory farms. Fraud is too “small” of an issue which doesn’t directly do what they want, just like how avoiding misquotes is too “small” of an issue which doesn’t directly make conversations productive.

Activists tend to want to help the animals directly. They want better living conditions for animals. They broadly aren’t concerned with companies putting untrue statements on their websites which mislead the public. Big lies like “our chickens spend their whole lives in pasture” when they’re actually kept locked in indoor cages would draw attention. Meat companies generally don’t lie that egregiously, but they do make many untrue and misleading statements which contribute to the public having the wrong idea about what farms are like.

Fraud is uncontroversially illegal. But many people wouldn’t really care if a company used a misquote in an ad. That would be “small” fraud. Basically, I think companies should communicate with the public using similar minimal standards to what rational philosophy discussions should use. They don’t have to be super smart or wise, but they should at least get basics right. By basics I mean things where there’s no real controversy about what the correct answer is. They should quote accurately, cite accurately, get math right, get facts right, avoid statements that are both ambiguous and misleading, and get all the other “small” issues right. Not all of these are fraud issues. If a person in a discussion makes an ad hominem attack instead of an argument, that’s bad. If a company does it on their website, that’s bad too, but it’s not fraud, it’s just dumb. But many types of “small” errors, like wrong quotes or facts in marketing materials, can be fraud.

What Is Fraud?

Legally, fraud involves communicating something false or misleading about something where there is an objective, right answer (not something which is a matter of opinion). Fraud has to be knowing or reckless, not an innocent accident. If they lied on purpose, or they chose not to take reasonable steps to find out if what they said is true or false, then it can be fraud. Fraud also requires harm – e.g. consumers who made purchasing decisions partly based on fraudulent information. And all of this has to be judged according to current, standard ideas in our society, not by using any advanced but unpopular philosophy analysis.

Does “Small” Fraud Matter?

There’s widespread agreement that it’s important to police a “big” fraud like FTX, Enron, Theranos, Bernie Madoff’s ponzi scheme, or Wells Fargo creating millions of accounts that people didn’t sign up for.

Do large corporations commit “small” frauds that qualify according to the legal meaning of fraud that I explained? I believe they do that routinely. It isn’t policed well. It could be.

If smaller frauds were policed well, would that help much? I think so. I think the effectiveness would be similar to the effectiveness of policing “small” errors in intellectual discussions. I think even many people who think it’d be ineffective can agree with me that it’d be similarly effective in the two different cases. There’s a parallel there.

Disallowing fraud is one of the basics of law and order, after disallowing violence. It’s important to classical liberalism and capitalism. It’s widely accepted by other schools of thought too, e.g. socialists and Keynesians also oppose fraud. But people from all those schools of thought tend not to care about “small” fraud like I do.

Fraud is closely related to breach of contract and theft. Suppose I read your marketing materials, reasonably conclude that your mattress doesn’t contain fiberglass, and buy it. The implied contract is that I trade $1000 for a mattress with various characteristics like being new, clean, a specific size, fiberglass-free and shipped to my home. If the mattress provided doesn’t satisfy the (implied) contract terms, then the company has not fulfilled their side of the contract. They are guilty of breach of contract. They therefore, in short, have no right to receive money from me as specified in the contract that they didn’t follow. If they keep my money anyway then, from a theoretical perspective, that’s theft because they have my property and won’t give it back. I could sue (and that’s at least somewhat realistic). Many people would see the connection to breach of contract and theft if the company purposefully shipped an empty box with no mattress in it, but fewer people seem to see it if they send a mattress which doesn’t match what was advertised in a “smaller” way.

“Small” Violence

Disallowing “small” instances of violence is much more popular than disallowing “small” frauds, but not everyone cares about it. Some people think pushing someone at a bar, or even getting in a fist fight, is acceptable behavior. I think it’s absolutely unacceptable to get into someone’s personal space in an intimidating manner, so that they reasonably fear that you might touch them in any way without consent. Worse is actually doing any intentional, aggressive, non-consensual touch. I wish this were enforced better. I think many people do have anti-violence opinions similar to mine. There are people who find even “small” violence horrifying and don’t want it to happen to anyone. That viewpoint exists for violence in a way that I don’t think it does for fraud or misquotes.

Note that I was discussing the enforcement of “small” violence to strangers. Unfortunately, people’s attitudes tend to be worse when it’s done to your wife, your girlfriend, your own child, etc. Police usually treat “domestic” violence differently than other violence and do less to stop it. However, again, lots of people besides me do want better protection for victims.

Maybe after “small” violence is more thoroughly rejected by almost everyone, society will start taking “small” fraud and “small” breach of contract more seriously.

Why To Improve “Small” Problems

Smaller problems tend to provide opportunities for improvement that are easier to get right, easier to implement, simpler, and less controversial about what the right answer is.

Basically, fix the easy stuff first and then you’ll get into a better situation and can reevaluate what problems are left. Fixing a bunch of small problems will usually help with some but not all of the bigger or harder problems.

Also, the best way to solve hard problems is often to break them down into small parts. So then you end up solving a bunch of small problems. This is just like learning a big, hard subject by breaking it down into many parts.

People often resist this, but not because they disagree that the small problem is bad or that your fix will work. There are a few reasons they resist it:

  • They are in a big hurry to work directly on the big problem that they care about
  • They are skeptical that the small fixes will add up to much or make much difference or be important enough
  • They think the small fixes will take too much work

Why would small fixes take a lot of work? Because people don’t respect them, sabotage them, complain about them, etc., instead of doing them. People make it harder than it has to be, then say it’s too hard.

Small fixes also seem like too much work if problem solving is broken to the point that fixing anything is nearly impossible. People in that situation often don’t even want to try to solve a problem unless the rewards are really big (or the issue is so small that they don’t recognize what they’re doing as problem solving – people who don’t want to solve “small” problems often do solve hundreds of tiny problems every day).

If you’re really stuck on problem solving and can barely solve anything, working on smaller problems can help you get unstuck. If you try to work on big problems, it’s more overwhelming and gives you more hard stuff to deal with at once. The big problem is hard and getting unstuck is hard, so that’s at least two things. It’d be better to get unstuck with a small, easy problem that is unimportant (so the stakes are low and you don’t feel much pressure), so the only hard part is whatever you’re stuck on, and everything else provides minimal distraction. Though I think many of these people want to be distracted from the (irrational) reasons they’re stuck and failing to solve problems, rather than wanting to face and try to solve what’s going on there.

Small fixes also seem too hard if you imagine doing many small things using conscious effort and attention. To get lots of small things right, you must practice, automatize, and use your subconscious. If you aren’t doing that, you aren’t going to be very effective at small or big things. Most of your brainpower is in your subconscious.


See also my articles Ignoring “Small” Errors and “Small” Fraud by Tyson and Food Safety Net Services.


Elliot Temple | Permalink | Messages (0)

EA Judo and Economic Calculation

Effective Altruism (EA) claims to like criticism. They have a common tactic. They say, in effect: “Thanks for this criticism. Our critics make us stronger. We have used this information to fix the weakness. So don’t lower your opinion of EA due to the criticism; raise it due to one more weakness being fixed.”

The term “EA Judo” comes from Notes on Effective Altruism by Michael Nielsen which defines EA Judo as:

strong critique of any particular "most good" strategy improves EA, it doesn't discredit it

This approach doesn’t always work. Some criticisms are hard to fix. Some criticisms require large changes to fix that EAs don’t want to make, such as making large changes to which causes they put money towards.

EA could sometimes be criticized for not having figured out a criticism themselves sooner, which shows a lack of intellectual rigor, leadership, organization, effort, something … which is much harder to fix than addressing one concrete weakness.

They like criticisms like “X is 10% more important, relative to Y, than you realized” at which point they can advise people to donate slightly more to X which is an easy fix. But they don’t like criticism of their methodology or criticism of how actually one of their major causes is counter-productive and should be discontinued.

The same pretending-to-like-criticism technique was the response Ludwig von Mises got from the socialists 100 years ago.

Mises told them a flaw in socialism (the economic calculation problem). At first they thought they could fix it. They thanked him for helping make socialism better.

Their fixes were superficial and wrong. He explained that their fixes didn’t work, and also why his criticism was deeper and more fundamental than they had recognized.

So then, with no fixes in sight, they stopped speaking with him. Which brings us to today when socialists, progressives and others still don’t really engage with Mises or classical liberalism.

EA behaves the same way. When you make criticisms that are harder to deal with, you get ignored. Or you get thanked and engaged with in non-impactful ways, then nothing much changes.


Elliot Temple | Permalink | Messages (0)

What Is Intelligence?

Intelligence is a universal knowledge creation system. It uses the only known method of knowledge creation: evolution. Knowledge is information adapted to a purpose.

The replicators that evolve are called ideas. They are varied and selected too.

How are they selected? By criticism. Criticisms are themselves ideas which can be criticized.

To evaluate an idea and a criticism of it, you also need context including goals (or purposes or values or something else about what is good or bad). Context and goals are ideas too. They too can be replicated, varied, selected/criticized, improved, thought about, etc.

A criticism explains that an idea fails at a goal. An idea, in isolation from any purpose, cannot be evaluated effectively. There need to be success and failure criteria (the goal, and other results that aren’t the goal) in order to evaluate and criticize an idea. The same idea can work for one goal and fail for another goal. (Actually, approximately all ideas work for some goals and fail for others.)
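The selection step described above can be sketched as a toy model. Everything here (the function names, the data shapes, the example idea and goals) is my own hypothetical formalization, not anything from the text; the point it illustrates is just that evaluation takes an (idea, goal) pair, so the same idea can be refuted for one goal and unrefuted for another.

```python
# Toy sketch: ideas are selected by criticism, relative to a goal.

def criticize(idea, goal):
    """Return a criticism (an explanation of failure) or None if no known
    criticism of this goal applies to the idea."""
    for criticism in goal["criticisms"]:
        if criticism["applies_to"](idea):
            return criticism["text"]
    return None

def evaluate(idea, goal):
    """An idea survives selection for a goal if no known criticism refutes it."""
    c = criticize(idea, goal)
    return ("refuted", c) if c else ("unrefuted", None)

# Example: the idea "walk" works for the goal "get exercise" but fails
# for the goal "cross the ocean".
walk = {"name": "walk"}
exercise = {"criticisms": []}
cross_ocean = {"criticisms": [
    {"applies_to": lambda idea: idea["name"] == "walk",
     "text": "walking fails at this goal: you can't walk across an ocean"},
]}

print(evaluate(walk, exercise))     # ('unrefuted', None)
print(evaluate(walk, cross_ocean))  # ('refuted', "walking fails at ...")
```

This matches the claim that approximately all ideas work for some goals and fail for others: nothing in the model assigns an idea a standalone score.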

How do I know this? I regard it as the best available theory given current knowledge. I don’t know a refutation of it and I don’t know of any viable alternative.

Interpretations of information (such as observations or sense data) are ideas too.

Emotions are ideas too, but they're often connected to preceding physiological states, which can make things more complicated. They can also precede changes in physiological states.

We mostly use our ideas in a subconscious way. You can think of your brain like a huge factory and your conscious mind like one person who can go around and do jobs, inspect workstations, monitor employees, etc. But at any time, there’s a lot of work going on which he isn’t seeing. The conscious mind has limited attention and needs to delegate a ton of stuff to the subconscious after figuring out how to do it. This is done with e.g. practice to form habits – a habit means your subconscious is doing at least part of the work so it can seem (partly) automatic from the perspective of your consciousness.

Your conscious mind could also be thought of as a small group of people at the factory who often stick together but can split up. There are claims that we can think about up to roughly seven things at once, or keep up to seven separate things in active memory, or that we can do some genuine multi-tasking (meaning doing multiple things at once, instead of doing only one at a time but switching frequently).

Due to limited conscious attention and limited short-term memory, one of the main things we do is take a few ideas and combine them into one new idea. That’s called integration. We take what used to be several mental units and create a new, single mental unit which retains a lot of the value of its components. But it’s just one thing, so it costs less attention than the previous multiple things did. By repeating this process, we can get advanced ideas.

If you combine four basic ideas, we can call the new idea a level 1 idea. Combine four level 1 ideas and you get a level 2 idea. Keep going and you can get a level 50 idea eventually. You can also combine ideas that aren’t from the same level, and you can combine different numbers of ideas.
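This leveling scheme can be sketched in a few lines of Python. The rule for mixed-level combinations is my assumption (the text only defines the same-level case); I take the result to sit one level above its highest-level component:

```python
def combine(levels):
    """Combine component ideas into one new idea (integration).

    `levels` holds the level of each component idea. By assumption
    (the article only defines the same-level case), the combined
    idea sits one level above its highest-level component.
    """
    return max(levels) + 1

print(combine([0, 0, 0, 0]))  # four basic ideas -> level 1
print(combine([1, 1, 1, 1]))  # four level 1 ideas -> level 2
print(combine([0, 1, 2]))     # mixed levels -> level 3, by this convention
```

Repeatedly feeding combined ideas back in as components is how the scheme reaches a level 50 idea eventually.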

This may create a pyramid structure with many more low level ideas than high level ideas. But it doesn’t necessarily have to. Say you have 10 level 0 ideas at the foundation. You can make 210 different combinations of four ideas from those original 10. You can also make 45 groups of two ideas, 120 groups of three, and 252 groups of five. (This assumes the order of the ideas doesn’t matter, and each idea can only be used once in a combination; otherwise you’d get even more combinations.)
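These combination counts are just binomial coefficients; a quick check in Python, using the standard library’s `math.comb`, confirms the numbers in the paragraph above:

```python
import math

# Distinct combinations you can form from 10 foundational (level 0)
# ideas, choosing k at a time. Order doesn't matter and no idea
# repeats within a combination, matching the article's assumptions.
for k in [2, 3, 4, 5]:
    print(k, math.comb(10, k))  # prints: 2 45, 3 120, 4 210, 5 252
```

If order mattered, or ideas could repeat, the counts would be larger still (permutations or combinations with repetition).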


Elliot Temple | Permalink | Messages (0)

Food Industry Problems Aren’t Special

Many factory farms are dirty and problematic, but so are the workplaces for many (illegally underpaid) people sewing garments in Los Angeles.

Factory farms make less healthy meat, but the way a lot of meat is processed also makes it less healthy. And vegetable oil may be doing more harm to people’s health than meat. Adding artificial colorings and sweeteners to food, or piling in extra sugar, does harm too. The world is full of problems.

You may disagree with me about some of these specific problems. My general point stands even if, actually, vegetable oil is fine. If you actually think the world is not full of huge problems, then we have a more relevant disagreement. In that case, you may wish to read some of my other writing about problems in the world or debate me.

The food industry, to some extent, is arrogantly trying to play God. They want to break food down into components (like salt, fat, sugar, protein, color and nutrients) and then recombine the components to build up ideal, cheap foods however they want. But they don’t know what they’re doing. They will remove a bunch of vitamins, then add back in a few that they know are important, while failing to add others back in. They have repeatedly hurt people by doing stuff like this. It’s especially dangerous when they think they know all the components needed to make baby formula, but they don’t – e.g. they will just get fat from soy and think they replaced fat with fat so it’s fine. But soy has clear differences from natural breast milk, such as a different fatty acid profile. They also have known for decades that the ratio of omega 3 and 6 fatty acids you consume is important, but then they put soy oil in many things and give people unhealthy ratios with too much omega 6. Then they also put out public health advice saying to eat more omega 3’s but not to eat fewer omega 6’s, even though they know it’s the ratio that matters and they know tons of people get way too much omega 6 (including infants).

Mixing huge batches of lettuce or ground beef (so if any of it has E. coli, the whole batch is contaminated) is similar to how Amazon commingles inventory from different sellers (including itself), doesn’t track what came from whom, and thereby encourages fraud: when fraud happens, Amazon has no idea which seller sent in the fraudulent item. That doesn’t stop Amazon from blaming and punishing whichever seller was getting paid for the particular sale when the fraud was noticed, even though he’s probably innocent. Due to policies like these, Amazon has a large amount of fraud on its platforms. Not all industries, in all areas, have these problems. E.g. I have heard of milk samples being tested from dairy farms, before the milk from many farms is mixed together, so that responsibility for problems can be placed on the correct people. Similarly, if it wanted to, Amazon could keep track of which products are sent in by which sellers in order to figure out which sellers send in fraudulent items.

Activists who care to consider the big picture should wonder why there are problems in many industries, and wonder what can be done about them.

For example, enforcing fraud laws better would affect basically all industries at once, so that is a candidate idea that could have much higher leverage.

Getting people to stop picking fights without 80% majorities could affect fighting, activism, tribalism and other problems across all topics, so it potentially has very high leverage.

Limiting the government’s powers to favor companies would apply to all industries.

Basic economics education could help people make better decisions about any industry and better judge what sort of policies are reasonable for any industry.

Dozens more high-leverage ideas could be brainstormed. The main obstacle to finding them is that people generally aren’t actually trying. Activists tend to be motivated by some concrete issue in one area (like helping animals or the environment, or preventing cancer, or helping kids or the elderly or battered women), not by abstract issues like higher leverage or fight-avoiding reforms. If a proposal is low-leverage and involves fighting with a lot of people (or powerful people) who oppose it, then in general it’s a really bad idea. Women’s shelters or soup kitchens are low leverage but largely unopposed, so that’s a lot better than a low leverage cause which many people think is bad. But it’s high leverage causes that have the potential to dramatically improve the world. Many low-leverage, unopposed causes can add up and make a big difference too. High leverage but opposed causes can easily be worse than low leverage unopposed causes. If you’re going to oppose people you really ought to aim for high leverage to try to make it worth it. Sometimes there’s an approach to a controversial cause that dramatically reduces opposition, and people should be more interested in seeking those out.


Elliot Temple | Permalink | Messages (0)

Activists Shouldn’t Fight with Anyone

This article is about how all activists, including animal activists, should stop fighting with people in polarizing ways. Instead, they should take a more intellectual approach to planning better strategies and avoiding fights.

Effective Activism

In a world with so many huge problems, there are two basic strategies for reforming things which make sense. Animal activism doesn’t fit either one. It’s a typical example, like many other types of activism, of what not to do (even if your cause is good).

Good Strategy 1: Fix things unopposed.

Work on projects where people aren’t fighting to stop you. Avoid controversy. This can be hard. You might think that helping people eat enough Vitamin A to avoid going blind would be uncontroversial, but if you try to solve the problem with golden rice then you’ll get a lot of opposition (because of GMOs). If it seems to you like a cause should be uncontroversial, but it’s not, you need to recognize and accept that in reality it’s controversial, no matter how dumb that is. Animal activism is controversial, whether or not it should be. So is abortion, global warming, immigration, economics, and anything else that sounds like a live political issue.

A simple rule of thumb is that 80% of people who care, or who will care before you’re done, need to agree with you. 80% majorities are readily available on many issues, but extremely unrealistic in the foreseeable future on many other issues like veganism or ending factory farming.

In the U.S. and every other reasonably democratic country, if you have an 80% majority on an issue, it’s pretty easy to get your way. If you live in a less democratic country, like Iran, you may have to consider a revolution if you have an 80% majority but are being oppressed by violent rulers. A revolution with a 52% majority is a really bad idea. Pushing hard for a controversial cause in the U.S. with a 52% majority is a bad idea too, though not as bad as a revolution. People should prioritize not fighting with other people a lot more than they do.

Put another way: Was there election fraud in the 2020 U.S. Presidential election? Certainly. There is every year. Did the fraud cost Trump the election? Maybe. Did Trump have an 80% majority supporting him? Absolutely not. There is nowhere near enough fraud to beat an 80% majority. If fraud swung the election, it was close anyway, so it doesn’t matter much who won. You need accurate enough elections that 80% majorities basically always win. The ability for the 80% to get their way is really important for enabling reform. But you shouldn’t care very much whether 52% majorities get their way. (For simplicity, my numbers are for popular vote elections with two candidates, which is not actually how U.S. Presidential elections work. Details vary a bit for other types of elections.)

Why an 80% majority instead of 100%? It’s more practical and realistic. Some people are unreasonable. Some people believe in UFOs or flat Earth. The point is to pick a high number that is realistically achievable when you persuade most people. We really do have 80% majorities on tons of issues, like whether women should be allowed to vote (which used to be controversial, but isn’t today). Should murder be illegal? That has well over an 80% majority. Are seatbelts good? Should alcohol be legal? Should we have some international trade instead of fully closing our borders? Should parents keep their own kids instead of having the government raise all the children? These things have over 80% majorities. The point is to get way more than 50% agreement but without the difficulties of trying to approach 100%.

Good Strategy 2: Do something really, really, really important.

Suppose you can’t get a clean, neat, tidy, easy or unopposed victory for your cause. And you’re not willing to go do something else instead. Suppose you’re actually going to fight with a lot of people who oppose you and work against them. Is that ever worth it or appropriate? Rarely, but yes, it could be. Fighting with people is massively overrated but I wouldn’t say to never do it. Even literal wars can be worth it, though they usually aren’t.

If you’re going to fight with people, and do activism for an opposed cause, it better be really really worth it. That means it needs high leverage. It can’t just be one cause because you like that cause. The cause needs to be a meta cause that will help with many future reforms. For example, if Iran has a revolution and changes to a democratic government, that will help them fix many, many issues going forward, such as laws about homosexuality, dancing, or female head coverings. It would be problematic to pick a single issue, like music, and have a huge fight in Iran to get change for just that one thing. If you’re going to have a big fight, you need bigger rewards than one single reform.

If you get some Polish grocery stores to stop selling live fish, that activism is low leverage. Even if it works exactly as intended, and it’s an improvement, it isn’t going to lead to a bunch of other wonderful results. It’s focused on a single issue, not a root cause.

If you get factory farms to change, it doesn’t suddenly get way easier to do abortion-related reforms. You aren’t getting at root causes, such as economic illiteracy or corrupt, lobbyist-influenced lawmakers, which are contributing to dozens of huge problems. You aren’t figuring out why so many large corporations are so awful and fixing that, which would improve factory farms and dozens of other things too. You aren’t making the world significantly more rational, nor making the government significantly better at implementing reforms.

If you don’t know how to do a better, more powerful reform, stop and plan more. Don’t just assume it’s impossible. The world desperately needs more people willing to be good intellectuals who study how things work and create good plans. Help with that. Please. We don’t need more front-line activists working on ineffective causes, fighting with people, and being led by poor leaders. There are so many front-line activists who are on the wrong side of things, fighting others who merely got lucky to be on the right side (but tribalist fighting is generally counter-productive even if you’re on the right side).

Fighting with people tends to be polarizing and divisive. It can make it harder to persuade people. If the people who disagree with you feel threatened, and think you might get the government to force your way of life on them, they will dig in and fight hard. They’ll stop listening to your reasoning. If you want a future society with social harmony, agreement and rational persuasion, you’re not helping by fighting with people and trying to get laws made that many other people don’t want, which reduces their interest in constructive debate.

Root Causes

The two strategies I brought up are doing things with little opposition and doing things that are so important they can fix many, many things. Be very skeptical of that second one. It’s a form of the “greater good” argument. Although fighting with people is bad, it could be worth it for the greater good if it fixes some root causes. But doing something with clear, immediate negatives in the name of the greater good rarely actually works out well.

There are other important ways to strategize besides looking for lack of opposition or importance. You can look for root causes instead of symptoms. Factory farms are downstream of various other problems. Approximately all the big corporations suck, not just the big farms. Why is that? What is going on there? What is causing that? People who want to fix factory farms should look into this and get a better understanding of the big picture.

I do believe that there are many practical, reasonable ways that factory farms could be improved, just like I think keeping factories clean in general is good for everyone from customers to workers to owners. What stops the companies from acting more reasonably? Why are they doing things that are worse for everyone, including themselves? What is broken there, and how is it related to many other types of big companies also being broken and irrational in many ways?

I have some answers but I won’t go into them now. I’ll just say that if you want to fix any of this stuff, you need a sophisticated view on the cause-and-effect relationships involved. You need to look at the full picture not just the picture in one industry before you can actually make a good plan. You also should find win/win solutions and present them in terms of mutual benefit instead of approaching it as activists fighting against enemies. You should do your best to proceed in a way where you don’t have enemies. Enemy-based activism is mostly counter-productive – it mostly makes the world worse by increasing fighting.

There are so many people who are so sure they’re right and so sure their cause is so important … and many of them are on opposite sides of the same cause. Don’t be one of those people. Stop rushing, stop fighting, read Eli Goldratt, look at the big picture, make cause-and-effect trees, make transition trees, etc., plan it all out, aim for mutual benefit, and reform things in a way with little to no fighting. If you can’t find ways to make progress without fighting, that usually means you don’t know what you’re doing and are making things worse not better (assuming a reasonably free, democratic country – this is less applicable in North Korea).


Elliot Temple | Permalink | Messages (0)