Capitalism or Charity

In an ideal capitalist society, a pretty straightforward hypothesis for how to do the most good is: make the most money you can. (If this doesn’t make sense to you, you aren’t familiar with the basic pro-capitalist claims. If you’re interested, Time Will Run Back is a good book to start with. It’s a novel about the leaders of a socialist dystopia trying to solve their problems and thereby reinventing capitalism.)

Instead of “earn to give”, the advice could just be “earn”.

What’s best to do with extra money? The first hypothesis to consider, from a capitalist perspective, is invest it. Helping with capital accumulation will do good.

I don’t think Effective Altruism (EA) has any analysis or refutation of these hypotheses. I’ve seen nothing indicating they understand the basic claims and reasoning of the capitalist viewpoint. They seem to just ignore thinkers like Ludwig von Mises.

We (in the USA and many other places) do not live in an ideal capitalist society, but we live in a society with significant capitalist elements. So the actions we’d take in a fully capitalist society should be considered as possibilities that may work well in our society, or which might work well with some modifications.

One cause that might do a lot of good is making society more capitalist. This merits analysis and consideration which I don’t think EA has done.

What are some of the objections to making money as a way to do good?

  • Disagreement about how economics works.
  • Loopholes – a society not being fully capitalist means it isn’t doing a full job of making sure satisfying consumers is the only way to make much money. E.g. it may be possible to get rich by fraud or by forcible suppression of competition (with your own force or the help of government force).
  • This only focuses on good that people are willing to pay for. People might not pay to benefit cats, and cats don’t have money to pay for their own benefit.
  • The general public could be shortsighted, have bad taste, etc. So giving them what they want most might not do the most good. (Some alternatives, like having a society ruled by philosopher kings, are probably worse.)

What are some advantages of the making money approach? Figuring out what will do good is really hard, but market economies provide prices that give guidance about how much people value goods or services. Higher prices indicate something does more good. Higher profits indicate something is more cost effective. (Profits are the selling price minus the costs of creating the product or providing the service. To be efficient, we need to consider expenses, not just revenue.)

Measuring Value

Lots of charities don’t know how to measure how much good they’re doing. EA tries to help with that problem. EA does analysis of how effective different charities are. But EA’s methods, like those of socialist central planners, aren’t very good. The market mechanism is much better at pricing things than EA is at assigning effectiveness scores to charities.

One of the main issues, which makes EA’s analysis job hard, is that different charities do qualitatively different things. EA has to compare unlike things. EA has to combine factors from different dimensions. E.g. EA tries to determine whether a childhood vaccines charity does more or less good than an AI Alignment charity.

If EA did a good job with their analysis, they could make a reasonable comparison of one childhood vaccine charity with another. But comparing different types of charities is like comparing apples to oranges. This is fundamentally problematic. One of the most impressive things about the market price system is it takes products which are totally different – e.g. food, clothes, tools, luxuries, TVs, furniture, cars – and puts them all on a common scale (dollars or, more generally, money). The free market is able to validly get comparable numbers for qualitatively different things. That’s an extremely hard problem in general, for complex scenarios, so basically neither EA nor central planners can do it well. (That partly isn’t their fault. It doesn’t mean they aren’t clever enough. I would fail at it too. The only way to win is to stop trying to do that and find a different approach. The fault is in using that approach, not in failing to get good answers with the approach. More thoughtful or diligent analysis won’t fix this.)

See Multi-Factor Decision Making Math for more information about the problems with comparing unlike things.


Elliot Temple | Permalink | Messages (0)

Shoddy Argument Pattern

There’s a standard pattern where people want to refute what you said, but it’s hard, so they do this:

  • Come up with a new argument that you didn’t preemptively address that says you’re wrong (and often also dumb).
  • The argument is extremely shoddy and wouldn’t hold up in debate.
  • But they avoid debating it.
  • Then they think they’re right and that you can be dismissed.

Using a low quality argument works well here because people try to think of and address potential high quality objections. And other critics will likely have already told them every reasonable objection that’s easy to think of. But no one can anticipate and pre-answer every dumb objection. It may be hard to think of a reasonable criticism that other people haven’t already brought up, but it’s always easy to think of a dumb new criticism. Coming up with dumb criticisms is easy, and you can probably think of one that no one has already pointed out is dumb.

Dumb criticisms don’t do very well in debate. You need to think of one, declare victory, and then avoid debate. There are several standard tactics for avoiding debate:

  • Do this to an author who put an argument in a book, so you aren’t actually in a conversation with them.
  • Be too busy to talk more.
  • Insult people who want to debate. Say they’re too dumb or unreasonable to be worth debating.
  • Refuse to discuss for other reasons, e.g. saying you don’t owe this person answers or saying that they should go debate with someone else.
  • Don’t tell anyone your argument. Just think it in your head, decide you’re right and your opponents aren’t worth engaging with, and move on.

People frequently do this to me using the unstated arguments technique. They come up with some reason in their head that I’m wrong, don’t say it, and use it to justify not respecting me, ending discussion, believing they’re right, etc. The unstated arguments are usually very shoddy. They often do this after they are losing a debate. Then they come up with worse arguments than the ones that were losing the debate, but keep those arguments to themselves and pretend the arguments are great.

Economics Example

Another part of the pattern is that sometimes people use arguments that you did anticipate, and pre-refute, in writing. They just ignore that you did that. They can rely on most of the audience being ignorant and not well-read. For example, from the introduction of The Critics of Keynesian Economics by Henry Hazlitt (page 2):

… I have included two selections— those from Jean Baptiste Say and John Stuart Mill—that long antedated the General Theory [by Keynes] itself. The truth of the basic propositions of the General Theory rests (on the contention or admission of most Keynesians) on the truth of Keynes's "refutation" of Say's Law. But when we turn to the original statement of this law in the words of the economist after whom it is named, and to its elaboration by the classical economist who argued it most fully, we find that these statements in themselves, particularly the one by Mill, anticipated the objections of Keynes and constituted a refutation of them in advance.

Keynes made shoddy, pre-refuted arguments and has gotten away with it, and become the most influential economist in the world, due to the lack of rational discussion and debate in the world today. See also Hazlitt’s book-length, detailed refutation of Keynes (from 1959) which, as far as I know, no Keynesian has ever made a serious attempt to refute with counter-arguments. (Note that Hazlitt was pretty famous and prestigious. He published two dozen books and wrote for major newspapers for decades, including 12 years at The New York Times. But opponents still wouldn’t even try to answer his arguments.)

Listening to Keynes instead of Ludwig von Mises has made humanity many trillions of dollars poorer. Doing it when Keynes made pre-refuted, shoddy arguments … and despite Hazlitt’s books and other attempts to point that out … is really sad. And it’s a really important fact about the world. It implies e.g. that if people were a bit more rational it would have a huge impact. Better rationality could easily have changed this one thing, and many other things too. And this is still an ongoing problem: as I write this, Keynes is still the most influential economist on government policy and it’s still doing massive economic harm every year. Rational debate over the matter is still not happening.

AI Alignment Example

Another example of the shoddy argument pattern is an AI Alignment researcher (Olle Häggström, a professor and author) responding to David Deutsch’s arguments about universal intelligences. His response is that even if Deutsch is right, and AGIs won’t have super intelligence or inhuman capabilities, and there will be no singularity … it doesn’t matter. Human/AGI equality regarding intelligence is logically compatible with military inequality, so they might wipe us out. But how will the robots get a huge military advantage without being smarter than us!? We’re told that humans are currently imperfect and flawed, so I guess the claim is the robots will lack human flaws somehow despite having equivalent mental capabilities to us? Why?

This is a vague, shoddy argument. Deutsch criticized a major, important claim of his side and he didn’t even try to defend that claim. He just carelessly said that it doesn’t matter because that was easier than attempting to refute Deutsch’s argument. He changed the topic away from the arguments Deutsch made to a dumb side-issue that Deutsch hadn’t already written about. And he put it in a book, declared victory before Deutsch could respond, and insulted Deutsch’s ideas, intelligence and rationality. I don’t mean it’s an insult by implication. He just directly put an insult in his book: “Deutsch seems to have fallen in love with his own abstractions and theorizing to the extent of losing touch with the real world.” He also focused more on insults than arguments when he talked in his blog comments.

If Deutsch actually refuted the super intelligence and singularity ideas, but then got something else wrong, that’d be worth praising and engaging with. It’d be a major contribution to human knowledge. It’d be really impressive, not insult-worthy.

Insulting Deutsch also helps prevent a debate with Deutsch from ever happening. Being a jerk can be an effective strategy so that the people you’re avoiding debating don’t actually want to talk with you anyway.


Elliot Temple | Permalink | Messages (0)

Do You Actually Want to Make Progress?

I’ve written a lot to help people make progress. The basic premise is a reader who wants to make progress but runs into some problems. He fails to solve some of the problems. He gets stuck. He could, therefore, be helped with advice, with new approaches to try, by learning new skills that could improve his problem solving abilities, and with other knowledge.

But even the self-selected few who read my work generally do not seem to want to make progress. I think that is what stops them. They aren’t blocked by obstacles. They are blocked by not trying, not caring, not doing much. (It’s not that they are opposed to making progress, either. They merely don’t actively, positively want to. They’re approximately neutral on the matter.)

You could say that not valuing progress much is a problem. But it’s a different sort of problem than, say, being unable to calculate a mathematical expression, being ignorant of what sentences mean, making logical errors, or being biased.

People have those other sorts of problems. They may be bad at grammar and logic, and that may prevent them from e.g. productively debating philosophy or even from productively reading Popper. But I don’t think they have those problems so badly that they’re stuck with no way to make progress. They could work on things step by step. Often they don’t want to work on more basic skills; they want to work on advanced stuff but skipping steps doesn’t work so they get stuck.

Apparently, a lot of people value engaging with clever, impressive stuff. They want to work on complex philosophy not arithmetic. They want to talk about science not what paragraphs mean. In that case, they don’t really value progress itself in a way that would motivate them to work on prerequisites and develop their skills. They value some of the results they think progress would give them. Or maybe they think skill building would take too long, so they give up and look for shortcuts (which then don’t work).

A theme in Atlas Shrugged by Ayn Rand is that most people don’t value their lives, don’t really want to live, and don’t really try to make progress. (Nor do most people particularly want to die. They are bad at wanting things, valuing things or having goals.)

People who say they are ambitious, and claim to care deeply about making progress, are usually lying to themselves. If we talk, that results in them repeating the lies to me. Lying to yourself leads to lying to others. So I cannot trust people who say they want to make progress.

So I’m left considering options like:

  • Only talk to people who already have done a lot, e.g. written 100,000 words of philosophy that didn’t suck (prior to that, they’re welcome to read my stuff, and perhaps even have very brief exchanges with me on my forum, but no serious or lengthy conversations with me).
  • Write for people who want to make progress, should any exist, and not worry about the rest.
  • Have more ways to test people, more ways they can differentiate themselves.
  • Let them figure out for themselves some ways to differentiate themselves, if they can.
  • Try to figure out why people are so passive and lacking in values, and try to solve that problem which is very foreign to me, even though they don’t want me to solve that problem and will not help me (they aren’t opposed to me solving it either; they don’t know how to particularly care).

I can’t trust high prestige people to be better. Social climbing isn’t a good sign. Someone being respected or famous or rich doesn’t mean they’re very rational, or that they want to make progress, or that they’ll have much interest in learning and self-improvement. Conventional track records mean little.

The people who are so neutral, passive, boring and gray usually do care a lot about, and react strongly to, some things. Some have a political tribe or care about celebrity gossip. Broadly, they’re second-handers who react to the opinions of others. Instead of having and living by their own values, they try to use the values of people around them as substitutes. The people around them are the same way, so you can get a chain of people trying to use the second-hand values of other people who are doing the same thing with other people still who are doing the same thing, and so on. Like in The Fountainhead by Ayn Rand:

A world where the thought of each man will not be his own, but an attempt to guess the thought in the brain of his neighbor who’ll have no thought of his own but an attempt to guess the thought of the next neighbor who’ll have no thought—and so on, Peter, around the globe.

Anyway, I think I’ve assumed too much that people want to make progress instead of addressing the hard problem that they don’t really. And it’s hard to speak to someone who doesn’t really want to make progress, because what do I have to offer them? I can offer things like more effective ways to achieve goals, but what good is that to someone who doesn’t really care about their goals or their life? People generally don’t take their own goals very seriously.


Elliot Temple | Permalink | Messages (0)

Breaking One’s Word

Most people seem to think it isn’t a big deal to break their word. It’s a “small” error to say they will do something and then not do it. Or maybe not even an error: they seem to do that on purpose as a conflict avoidance strategy. That often sabotages things: if they hadn’t said they would do it, other arrangements would have been made to get it done.

This has come up with rational debate policies. Due to my ideas, several people besides me have posted policies in writing. But none of them could be relied on to keep his word, and some have broken their word. A policy offering guarantees/promises doesn’t mean much unless the author is trustworthy, and most people aren’t trustworthy.

Trust also comes up when writing policy conditions. Suppose my policy says “If you agree to do X, then I will do Y.” Unfortunately, many people will simply agree to X, get Y, then break their word. Policies based on other people agreeing to stuff only work well if they aren’t liars.

Making the other person go first can help deal with untrusted people but often doesn’t work. E.g. I might want them to agree to a condition for how a debate may end, in which case I’ll have to talk with them for a while before we get to the end where they might break their word. Or I might want them to agree to a condition for how to behave during a debate or what procedures to follow, and they might do what they said for a while, then break their word midway through the debate.

Potential solutions include only making agreements with people with substantial reputations or requiring people to put thousands of dollars in escrow, with a neutral arbiter who will give it to me if the person didn’t follow the rules they agreed to. These approaches are problematic and would prevent most discussions from happening. But there are no serious consequences when people with no reputation, and no money at stake, break their word. And I generally only want discussions that meet some criteria that are mutually agreed upon in advance, not just any discussion with no standards whatsoever.

It’s questionable how much breaking one’s word affects people with big, positive reputations. Often I think it wouldn’t matter much anyway. Their fans wouldn’t care or might not even find out that it happened. Most people with a lot of fans wouldn’t give me any way to tell all their fans that they broke their word. They don’t have anything like a forum where I can write something that most of their fans would see. Instead, they can communicate with lots of fans with e.g. a newsletter, tweets, or new Facebook posts – but if they break their word to me, they will never give me access to those things to tell their fans what they did.

My main goal with rationality policies is to explain what is rational. I’m trying to understand what people could do that would work if they did it. Creating practical solutions, that will work today, is secondary. I try to offer realistic options, and not everyone is dishonest, but many people are and I don’t have a great solution for this situation. People could study integrity as a prerequisite for having a debate policy, but I don’t expect people lacking integrity to actually do that.

One of my main motives for placing conditions on discussions is that people abruptly leave in the middle. I don’t want half-discussions. Why? Because I’ve already had the first half of too many discussions too many times. It’s the second half that contains more new information. The first half often goes over standard, well-known issues in order to get to the point of saying new things.

If people would make it a very high priority to keep their word, they would be better people, have better lives, and treat others better. But they don’t want to.

People find it socially convenient to lie. It helps them avoid conflict by lying to hide disagreements. It helps them avoid being judged negatively by lying to avoid admitting to believing anything that the person they’re speaking with considers wrong or stupid. It helps them pretend to be agreeable by agreeing to things they won’t do. Lying helps them flatter others to try to manipulate them. Lying helps them prop up their self-esteem and reputation by pretending to be something they aren’t. Lying helps them avoid the effort of thinking about what they will and won’t do, or do or don’t believe, or under what conditions their plans might fail.

People often plan to do something, say they will do it, but don’t make reasonable arrangements to make sure it actually happens. They don’t bother to set an alarm, then forget. They have no reliable project management approach – such as a todo list that they habitually check several times per day – and then say they will do something even though they aren’t in a position to reliably do anything that isn’t a habit. They don’t want to face the reality of how unreliable they are. The core problem is often that they don’t know how to follow through on their plans, and they then tack on lying to people to avoid facing reality. They tell themselves it wasn’t lying because they said they would do something and genuinely intended to do it at the time they said that. But it is lying to say you’ll do something if you aren’t going to make appropriate plans and arrangements so that it actually, reliably gets done.

Regarding debates or other interactions, I could ask people what they’ve done to improve their integrity (and rationality and skill) to be way better than culturally normal. If they haven’t put in the work, then they shouldn’t expect to be significantly better than convention at integrity, even in their own opinion. If they claim to be better anyway, they are demonstrating a lack of integrity. And if they don’t claim to be better, then they should agree with me that they aren’t in a position to agree to the debate rules – they don’t know if they will actually keep their word about that.

There are various other ways I could ask hard questions and filter people out, but I want to allow people to actually have some discussions with me. I don’t want to reject everyone as too flawed even if I’m correct about the flaws.

There are problems here that could use better solutions.


See also my article on Lying.


Elliot Temple | Permalink | Messages (0)

“Small” Fraud by Tyson and Food Safety Net Services

This is a followup for my article “Small” Errors, Frauds and Violences. It discusses a specific example of “small” fraud.


Tyson is a large meat processing company that gets meat from factory farms. Tyson’s website advertises that their meat passes objective inspections and audits (mirror) from unbiased third parties.

Tyson makes these claims because these issues matter to consumers and affect purchasing. For example, a 2015 survey found that “56 percent of US consumers stop buying from companies they believe are unethical” and 35% would stop buying even if there is no substitute available. So if Tyson is lying to seem more ethical, there is actual harm to consumers who bought products they wouldn’t have bought without being lied to, so it’d qualify legally as fraud.

So if Tyson says (mirror) “The [third party] audits give us rigorous feedback to help fine tune our food safety practices.”, that better be true. They better actually have internal documents containing text which a reasonable person could interpret as “rigorous feedback”. And if Tyson puts up an animal welfare section on their sustainability website, their claims better be true.

I don’t think this stuff is false in a “big” way. E.g., they say they audited 50 facilities in 2021 just for their “Social Compliance Auditing program”. Did they actually audit 0 facilities? Are they just lying and making stuff up? I really doubt it.

But is it “small” fraud? Is it actually true that the audits give them rigorous feedback? Are consumers being misled?

I am suspicious because they get third party audits from Food Safety Net Services, an allegedly independent company that posts partisan meat propaganda (mirror) on their own public website.

How rigorous or independent are the audits from a company that markets (mirror) “Establishing Credibility” as a service they provide while talking about how you need a “non-biased, third-party testing facility” (themselves) and saying they’ll help you gain the “trust” of consumers? They obviously aren’t actually non-biased since they somehow think posting partisan meat propaganda on their website is fine while trying to claim non-bias.

Food Safety Net Services don’t even have a Wikipedia page or other basic information about them available, but they do say (mirror) that their auditing:

started as a subset of FSNS Laboratories in 1998. The primary focus of the auditing group was product and customer-specific audits for laboratory customers. With a large customer base in the meat industry, our auditing business started by offering services specific to meat production and processing. … While still heavily involved in the meat industry, our focus in 2008 broadened to include all food manufacturing sites.

The auditing started with a pre-existing customer base in the meat industry, and a decade later expanded to cover other types of food. It sounds independent like how Uber drivers are independent contractors or how many Amazon delivery drivers work for independent companies. This is the meat industry auditing itself, displaying their partisan biases in public, and then claiming they have non-biased, independent auditing. How can you do a non-biased audit when you have no other income and must please your meat customers? How can you do a non-biased meat audit when you literally post meat-related propaganda articles on your website?

How can you do independent, non-biased audits when your meat auditing team is run by meat industry veterans? Isn’t it suspicious that your “Senior Vice President of Audit Services” “spent 20 years in meat processing facilities, a majority of the time in operational management. Operational experience included steak cutting, marinating, fully cooked meat products, par fry meat and vegetables, batter and breaded meat and vegetables, beef slaughter and fabrication, ground beef, and beef trimmings.” (source)? Why exactly is she qualified to be in charge of non-biased audits? Did she undergo anti-bias training? What has she done to become unbiased about meat after her time in the industry? None of her listed credentials actually say anything about her ability to be unbiased about meat auditing. Instead of trying to establish her objectivity in any way, they brag about someone with “a strong background in the meat industry” performing over 300 audits.

Their Impartiality Statement is one paragraph long and says “Team members … have agreed to operate in an ethical manner with no conflict or perceived conflict of interest.” and employees have to sign an ethics document promising to disclose conflicts of interest. That’s it. Their strategy for providing non-biased audits is to make low-level employees promise in writing to be non-biased; that way, if anything goes wrong, management can put all the blame on the workers and claim the workers defrauded them by falsely signing the contracts they were required to sign to be hired.

Is this a ridiculous joke, lawbreaking, or a “small” fraud that doesn’t really matter, or a “small” fraud that actually does matter? Would ending practices like this make the industry better and lead to more sanitary conditions for farm animals, or would it be irrelevant?

I think ending fraud would indirectly result in better conditions for animals and reduce their suffering (on the premise that animals can suffer). Companies would have to make changes, like using more effective audits, so that their policies are followed more. And they’d have to change their practices to better match what the public thinks is OK.

This stuff isn’t very hard to find, but in a world where even some anti-factory-farm activists don’t care (and actually express high confidence about the legal innocence of the factory farm companies), it’s hard to fix.

Though some activists actually have done some better and more useful work. For example, The Humane League has a 2021 report about slaughterhouses not following the law. Despite bias, current auditing practices already show many violations. That’s not primarily about fraud, but it implies fraud because the companies tell the public that their meat was produced in compliance with the law.


Elliot Temple | Permalink | Messages (0)

“Small” Errors, Frauds and Violences

People often don’t want to fix “small” problems. They commonly don’t believe that consistently getting “small” stuff right would lead to better outcomes for big problems.

“Small” Intellectual Errors

For example, people generally don’t think avoiding misquotes, incorrect cites, factual errors, math errors, grammar errors, ambiguous sentences and vague references would dramatically improve discussions.

They’ve never tried it, don’t have the skill to try it even if they wanted to, and have no experience with what it’d be like.

But they believe it’d be too much work because they aren’t imagining practicing these skills until they’re largely automated. If you do all the work with conscious effort, that would indeed be too much work, as it would be for most things.

You automatically use many words for their correct meanings, like “cat” or “table”. What you automatically and reliably get right, with ease, can be expanded with study and practice. What you find intuitive or second-nature can be expanded. Reliably getting it right without significant conscious effort is called mastery.

But you can basically only expand your mastery to “small” issues. You can’t just take some big, hard, complex thing with 50 parts and master it as a single, whole unit. You have to break it into those 50 parts and master them individually. You can only realistically work on a few at a time.

So if you don’t want to work on small things, you’ll be stuck. And most people are pretty stuck on most topics, so that makes sense. The theory fits with my observations.

Also, in general, you can’t know how “small” an error is until after you fix it. Sometimes what appears to be a “small” error turns out very important and requires large changes to fix. And sometimes what appears to be a “big” error can be fixed with one small change. After you understand the error and its solution, you can judge its size. But when there are still significant unknowns, you’re just guessing. So if you refuse to try to fix “small” errors, you will inevitably guess that some “big” errors are small and then refuse to try to fix them.

Factory Farm Fraud

Similarly, animal welfare activists generally don’t believe that policing fraud is a good approach to factory farms. Fraud is too “small” of an issue which doesn’t directly do what they want, just like how avoiding misquotes is too “small” of an issue which doesn’t directly make conversations productive.

Activists tend to want to help the animals directly. They want better living conditions for animals. They broadly aren’t concerned with companies putting untrue statements on their websites which mislead the public. Big lies like “our chickens spend their whole lives in pasture” when they’re actually kept locked in indoor cages would draw attention. Meat companies generally don’t lie that egregiously, but they do make many untrue and misleading statements which contribute to the public having the wrong idea about what farms are like.

Fraud is uncontroversially illegal. But many people wouldn’t really care if a company used a misquote in an ad. That would be “small” fraud. Basically, I think companies should communicate with the public using similar minimal standards to what rational philosophy discussions should use. They don’t have to be super smart or wise, but they should at least get basics right. By basics I mean things where there’s no real controversy about what the correct answer is. They should quote accurately, cite accurately, get math right, get facts right, avoid statements that are both ambiguous and misleading, and get all the other “small” issues right. Not all of these are fraud issues. If a person in a discussion makes an ad hominem attack instead of an argument, that’s bad. If a company does it on their website, that’s bad too, but it’s not fraud, it’s just dumb. But many types of “small” errors, like wrong quotes or facts in marketing materials, can be fraud.

What Is Fraud?

Legally, fraud involves communicating something false or misleading about something where there is an objective, right answer (not something which is a matter of opinion). Fraud has to be knowing or reckless, not an innocent accident. If they lied on purpose, or they chose not to take reasonable steps to find out if what they said is true or false, then it can be fraud. Fraud also requires harm – e.g. consumers who made purchasing decisions partly based on fraudulent information. And all of this has to be judged according to current, standard ideas in our society, not by using any advanced but unpopular philosophy analysis.

Does “Small” Fraud Matter?

There’s widespread agreement that it’s important to police a “big” fraud like FTX, Enron, Theranos, Bernie Madoff’s Ponzi scheme, or Wells Fargo creating millions of accounts that people didn’t sign up for.

Do large corporations commit “small” frauds that qualify according to the legal meaning of fraud that I explained? I believe they do that routinely. It isn’t policed well. It could be.

If smaller frauds were policed well, would that help much? I think so. I think the effectiveness would be similar to the effectiveness of policing “small” errors in intellectual discussions. I think even many people who think it’d be ineffective can agree with me that it’d be similarly effective in the two different cases. There’s a parallel there.

Disallowing fraud is one of the basics of law and order, after disallowing violence. It’s important to classical liberalism and capitalism. It’s widely accepted by other schools of thought too, e.g. socialists and Keynesians also oppose fraud. But people from all those schools of thought tend not to care about “small” fraud like I do.

Fraud is closely related to breach of contract and theft. Suppose I read your marketing materials, reasonably conclude that your mattress doesn’t contain fiberglass, and buy it. The implied contract is that I trade $1000 for a mattress with various characteristics like being new, clean, a specific size, fiberglass-free and shipped to my home. If the mattress provided doesn’t satisfy the (implied) contract terms, then the company has not fulfilled their side of the contract. They are guilty of breach of contract. They therefore, in short, have no right to receive money from me as specified in the contract that they didn’t follow. If they keep my money anyway then, from a theoretical perspective, that’s theft because they have my property and won’t give it back. I could sue (and that’s at least somewhat realistic). Many people would see the connection to breach of contract and theft if the company purposefully shipped an empty box with no mattress in it, but fewer people seem to see it if they send a mattress which doesn’t match what was advertised in a “smaller” way.

“Small” Violence

Disallowing “small” instances of violence is much more popular than disallowing “small” frauds, but not everyone cares about it. Some people think pushing someone at a bar, or even getting in a fist fight, is acceptable behavior. I think it’s absolutely unacceptable to get into someone’s personal space in an intimidating manner, so that they reasonably fear that you might touch them in any way without consent. Worse is actually doing any intentional, aggressive, non-consensual touch. I wish this was enforced better. I think many people do have anti-violence opinions similar to mine. There are people who find even “small” violence horrifying and don’t want it to happen to anyone. That viewpoint exists for violence in a way that I don’t think it does for fraud or misquotes.

Note that I was discussing enforcement against “small” violence done to strangers. Unfortunately, people’s attitudes tend to be worse when it’s done to a wife, a girlfriend, one’s own child, etc. Police usually treat “domestic” violence differently than other violence and do less to stop it. However, again, lots of people besides me do want better protection for victims.

Maybe after “small” violence is more thoroughly rejected by almost everyone, society will start taking “small” fraud and “small” breach of contract more seriously.

Why To Improve “Small” Problems

Smaller problems tend to provide opportunities for improvement that are easier to get right, easier to implement, simpler, and less controversial about what the right answer is.

Basically, fix the easy stuff first and then you’ll get into a better situation and can reevaluate what problems are left. Fixing a bunch of small problems will usually help with some but not all of the bigger or harder problems.

Also, the best way to solve hard problems is often to break them down into small parts. So then you end up solving a bunch of small problems. This is just like learning a big, hard subject by breaking it down into many parts.

People often resist this, but not because they disagree that the small problem is bad or that your fix will work. There are a few reasons they resist it:

  • They are in a big hurry to work directly on the big problem that they care about
  • They are skeptical that the small fixes will add up to much or make much difference or be important enough
  • They think the small fixes will take too much work

Why would small fixes take a lot of work? Because people don’t respect them, sabotage them, complain about them, etc., instead of doing them. People make it harder than it has to be, then say it’s too hard.

Small fixes also seem like too much work if problem solving is broken to the point that fixing anything is nearly impossible. People in that situation often don’t even want to try to solve a problem unless the problem is really bad (or the issue is so small that they don’t recognize what they’re doing as problem solving – people who don’t want to solve “small” problems often do solve hundreds of tiny problems every day).

If you’re really stuck on problem solving and can barely solve anything, working on smaller problems can help you get unstuck. If you try to work on big problems, it’s more overwhelming and gives you more hard stuff to deal with at once. The big problem is hard and getting unstuck is hard, so that’s at least two things. It’d be better to get unstuck with a small, easy problem that is unimportant (so the stakes are low and you don’t feel much pressure), so the only hard part is whatever you’re stuck on, and everything else provides minimal distraction. Though I think many of these people want to be distracted from the (irrational) reasons they’re stuck and failing to solve problems, rather than wanting to face and try to solve what’s going on there.

Small fixes also seem too hard if you imagine doing many small things using conscious effort and attention. To get lots of small things right, you must practice, automatize, and use your subconscious. If you aren’t doing that, you aren’t going to be very effective at small or big things. Most of your brainpower is in your subconscious.


See also my articles Ignoring “Small” Errors and “Small” Fraud by Tyson and Food Safety Net Services.


Elliot Temple | Permalink | Messages (0)

EA Judo and Economic Calculation

Effective Altruism (EA) claims to like criticism. They have a common tactic: they say, “Thanks for this criticism. Our critics make us stronger. We have used this information to fix the weakness. So don’t lower your opinion of EA due to the criticism; raise it due to one more weakness being fixed.”

The term “EA Judo” comes from Notes on Effective Altruism by Michael Nielsen which defines EA Judo as:

strong critique of any particular "most good" strategy improves EA, it doesn't discredit it

This approach doesn’t always work. Some criticisms are hard to fix. Some criticisms require large changes to fix that EAs don’t want to make, such as making large changes to which causes they put money towards.

EA could sometimes be criticized for not having figured out a criticism themselves sooner, which shows a lack of intellectual rigor, leadership, organization, effort, something … which is much harder to fix than addressing one concrete weakness.

They like criticisms like “X is 10% more important, relative to Y, than you realized” at which point they can advise people to donate slightly more to X which is an easy fix. But they don’t like criticism of their methodology or criticism of how actually one of their major causes is counter-productive and should be discontinued.

The same pretending-to-like-criticism technique was the response Ludwig von Mises got from the socialists 100 years ago.

Mises told them a flaw in socialism (the economic calculation problem). At first they thought they could fix it. They thanked him for helping make socialism better.

Their fixes were superficial and wrong. He explained that their fixes didn’t work, and also why his criticism was deeper and more fundamental than they had recognized.

So then, with no fixes in sight, they stopped speaking with him. Which brings us to today when socialists, progressives and others still don’t really engage with Mises or classical liberalism.

EA behaves the same way. When you make criticisms that are harder to deal with, you get ignored. Or you get thanked and engaged with in non-impactful ways, then nothing much changes.


Elliot Temple | Permalink | Messages (0)

What Is Intelligence?

Intelligence is a universal knowledge creation system. It uses the only known method of knowledge creation: evolution. Knowledge is information adapted to a purpose.

The replicators that evolve are called ideas. They are varied and selected too.

How are they selected? By criticism. Criticisms are themselves ideas which can be criticized.

To evaluate an idea and a criticism of it, you also need context including goals (or purposes or values or something else about what is good or bad). Context and goals are ideas too. They too can be replicated, varied, selected/criticized, improved, thought about, etc.

A criticism explains that an idea fails at a goal. An idea, in isolation from any purpose, cannot be evaluated effectively. There need to be success and failure criteria (the goal, and other results that aren’t the goal) in order to evaluate and criticize an idea. The same idea can work for one goal and fail for another goal. (Actually, approximately all ideas work for some goals and fail for others.)

How do I know this? I regard it as the best available theory given current knowledge. I don’t know a refutation of it and I don’t know of any viable alternative.

Interpretations of information (such as observations or sense data) are ideas too.

Emotions are ideas too but they’re often connected to preceding physiological states which can make things more complicated. They can also precede changes in physiological states.

We mostly use our ideas in a subconscious way. You can think of your brain like a huge factory and your conscious mind like one person who can go around and do jobs, inspect workstations, monitor employees, etc. But at any time, there’s a lot of work going on which he isn’t seeing. The conscious mind has limited attention and needs to delegate a ton of stuff to the subconscious after figuring out how to do it. This is done with e.g. practice to form habits – a habit means your subconscious is doing at least part of the work so it can seem (partly) automatic from the perspective of your consciousness.

Your conscious mind could also be thought of as a small group of people at the factory who often stick together but can split up. There are claims that we can think about up to roughly seven things at once, or keep up to seven separate things in active memory, or that we can do some genuine multi-tasking (meaning doing multiple things at once, instead of doing only one at a time but switching frequently).

Due to limited conscious attention and limited short-term memory, one of the main things we do is take a few ideas and combine them into one new idea. That’s called integration. We take what used to be several mental units and create a new, single mental unit which has a lot of the value of its components. But it’s just one thing, so it costs less attention than the previous multiple things. By repeating this process, we can get advanced ideas.

If you combine four basic ideas, we can call the new idea a level 1 idea. Combine four level 1 ideas and you get a level 2 idea. Keep going and you can get a level 50 idea eventually. You can also combine ideas that aren’t from the same level, and you can combine different numbers of ideas.

This may create a pyramid structure with many more low level ideas than high level ideas. But it doesn’t necessarily have to. Say you have 10 level 0 ideas at the foundation. You can make 210 different combinations of four ideas from those original 10. You can also make 45 groups of two ideas, 120 groups of three, and 252 groups of five. (This assumes the order of the ideas doesn’t matter, and each idea can only be used once in a combination, or else you’d get even more combinations.)


Elliot Temple | Permalink | Messages (0)