Reddit Response About Copyright and Plagiarism

I responded to the Reddit post Concerns about plagiarism in an eBook I'd like to publish [MN]:

I'm going to try to keep this short and sweet...

I took extremely extensive notes (~70 pages) during an occupational licensure video course by a major education company.

I'd like to edit those notes down into a little eBook but am concerned about plagiarism and whatever legal repercussions that could bring.

I tried to put as much as reasonably possible into my own words, but some short definitions or explanations were just best kept as they were stated or presented in the video course.

I've put various sections of my notes through free plagiarism checkers and have scored >96% unique each time and haven't seen any links back to the major company that made the video course.


So... How concerned should I be about plagiarism?

If I give the eBook away for free on my business's website would that eliminate ALL risks from any potential plagiarism?

(I'd rather sell the eBook but am willing to go the free route if tiny plagiarisms could turn into a huge PITA)

My reply:

Using exact phrases from the course is a copyright violation unless you follow the rules of “fair use”. A positive factor for fair use is creating a transformative work (whether yours is transformative is unclear from your description). If your work could substitute for the original work, that’s a negative factor. Using it commercially (selling it or using it to promote a business) is another negative factor. Also, if you want to claim it’s fair use, you should put each exact phrase inside quotation marks and cite the source. You can find more information at https://www.avvo.com/legal-answers/can-i-sell-book-summary-like-cliff-notes-or-monarc-312496.html and https://en.wikipedia.org/wiki/Fair_use

If you do commentary, criticism or write your own original thoughts, that would help make this legal. If you just copy their work as a whole, that may be a copyright violation even if you reword most of it. See https://en.wikipedia.org/wiki/Paraphrasing_of_copyrighted_material

Reworded material can still be plagiarism. If you don’t want to be a plagiarist, then your book should inform your readers that it’s a summary of the course and say which course it is and which company it’s from. If you present it as your own ideas, rather than as a derivative work, then it’s clearly plagiarism. Plagiarism is unethical, not illegal.

Both fair use and plagiarism have legal risks. The lawyers at this large company may send you a cease and desist letter or file a lawsuit. They can do that even if you didn’t break a law. Even if you’re in the right legally, proving that in court would still be expensive and risky. Making your book free wouldn’t remove your risk; takedown demands are pretty common even for free material that’s clearly legal.


Elliot Temple | Permalink | Messages (0)


Government Policy Proposals and Local Optima

Suppose you can influence government policy. You can make a suggestion that will actually be followed. It could be on any big, complex topic, like economic policy, COVID policy, military policy, etc. What will happen next?

Your policy will probably be ended too early or kept going too long. It will probably have changes made to it. And other policies will probably be implemented at the same time which conflict with it.

Most people who influence policy only do it once. Only a small group of elites get to influence many policies. This post isn't about those elites.

Most people who try to influence government policy never have a single success. But some are luckier and get to influence one thing, once.

If you do get listened to more than once, it might be for one thing now, and something else several years later.

If you only get to affect one thing, what kind of suggestion should you make?

Something that works well largely independently of what other policies the government implements. Something that's robust to changes, so it'll still work OK even if other people change some parts of it. Something that's useful even if it's done for too short or too long a time period.

Also, you'll be judged by the outcome for the one idea you had that was used, even though a bunch of other factors were outside of your control, and the rest of your plan wasn't followed.

If you suggest a five-part plan, there's a major risk: people will listen to one part (if you're lucky), ignore the other four parts, and then blame you when it doesn't work out well. And if you tell them "don't do that part unless you do all the rest", they may do it anyway and still blame you; otherwise, you're just not going to influence policy at all. If you make huge demands, want to control lots of things, and say it's all or nothing, you can expect to get nothing.

In other words, when suggesting government policy, there's a large incentive to propose local optima. You want proposals that work well in isolation and provide some sort of benefit regardless of what else is going on.

It's possible to choose local optima that you think will also contribute to global (overall) benefit, or not. Some people make a good faith effort to do that, and others don't. The people who don't make that effort have an advantage at finding local optima they can get listened to about, because they have a wider selection available and can optimize for other factors besides big-picture benefit. The system, under this simplified model, does not incentivize caring about overall benefit. People might already care about that for other reasons.

When other people edit your policy, or do other policies simultaneously, they will usually try to avoid ruining the local benefits of the policy. They may fail, but they tend to have some awareness of the main, immediate point of the policy (otherwise they wouldn't listen to it at all). But the overall benefit is more likely to be ruined by changes or other policies.

This was a thought I had about why government policy, and political advocacy, suck so much.

Also, similar issues apply to giving advice to people on online forums. They will often listen to one thing, change it, and also do several things that conflict with it. Compared with government policy, there's way more chance they listen to a few things instead of only one. But it's unlikely they'll actually follow a plan as a whole and avoid breaking parts of it. And when they do a small portion of your advice, but mostly don't listen to you, they'll probably blame you for bad outcomes because they listened to you some.

These issues even apply to merely writing blog posts that discuss abstract concepts: people may interpret you as saying some things are good or bad, or as otherwise making suggestions, then listen to one or a few parts, change the parts they listen to in ways you disagree with, screw up everything else, and blame you when it doesn't work out. One way bloggers and other authors may try to deal with this is by saying fewer things, avoiding complexity, and basically just repeating a few simple talking points (repeating a limited number of simple talking points may remind you of political advocacy).


Elliot Temple | Permalink | Messages (0)

OpenAI Fires Then Rehires CEO

Here's my understanding of the recent OpenAI drama, with Sam Altman being fired then coming back (and the board of directors being mostly fired instead), and some thoughts about it:

OpenAI was created with a mission and certain rules. This was all stated clearly in writing. All employees and investors knew it, or would have known if they were paying any attention.

In short, the board of directors had all the power. And the mission was to help humanity, not make money.

The board of directors fired the CEO. They were rude about it. They didn't talk with him, employees or investors first. They probably thought: it doesn't matter how we do this, the rules say we get our way, so obviously we'll get our way. They may have thought being abrupt and sneaky would give people less opportunity to complain and object. Maybe they wanted to get it over with fast.

The board of directors may have been concerned about AI Safety: that the CEO was leading the company in a direction that might result in AIs wiping out humanity. This has been partly denied, and I haven't followed all the details, but it still seems like it may be what happened. Regardless, I think it could have happened, and the results would likely have been the same.

The board of directors lost.

You can't write rules about safe AI and then try to actually follow them and get your way when there are billions of dollars involved. Pressure will happen. It will be non-violent (usually, at least at first). This wasn't about death threats or beatings on the street. But big money is above written rules and contracts. Sometimes. Not always. Elon Musk tried to get out of his contract to buy Twitter and failed (though note that was big money against big money).

Part of the pressure was people like Matt Levine and John Gruber joining in on attacking and mocking the board. They took sides. They didn't directly and openly state that they were taking sides, but they did. A lot of journalists took sides too.

Another part of the pressure was the threat that most of the OpenAI employees would quit and go work for Microsoft and do the same stuff there, away from the OpenAI board.

Although I'm not one of the people who are concerned that this kind of software may kill us all, I don't think Matt Levine and the others know that it won't. They don't have an informed opinion about that. They don't have rational arguments about it, and they don't care about rational debate. So I sympathize with the AI doomers. It must be very worrying for them to see not only the antipathy their ideas get from fools who don't know better, but also that written rules will not protect them. Just having it in writing that "if X happens, we will pull the plug" does not mean the plug will be pulled. ("We'll just pull the plug if things start looking worrying" is one of the common bad arguments used against AI doomers.)

It's also relevant to me and my ideas like "what if we had written rules to govern our debates, and then people participating in debates followed those rules, just like how chess players follow the rules of chess". It's hard to make that work. People often break rules and break their word, even if there are high stakes and legally-enforceable written contracts (not that anyone necessarily broke a contract; but the contract didn't win; other types of pressure got the people with contractual rights to back down, so the contract was evidently not the most important factor).

The people who made OpenAI actually put stuff in writing like "yo, investors, you should think of your investment a lot like a donation, and if you don't like that then don't invest" and Microsoft and others were like "whatever, here's billions of dollars on those terms" and employees were like "hell yeah I want stock options – I want to work here for a high salary and also be an investor on those terms". And then the outside investors and employees were totally outraged when actions were taken that could lower the value of their investment and treat it a bit like a donation to a non-profit that doesn't have a profit-driven mission.

I think the board handled things poorly too. They certainly didn't do it how I would have. To me, it's an "everyone sucks here" situation, but a lot of people seem to think only the board sucks, and they don't really mind trampling over contracts and written rules when they think the victim sucks.

Although I don't agree with AI doom ideas, I think they do deserve to be taken seriously in rational debate, not mocked, ignored, and put under so much pressure that they lose when trying to assert their contractual rights.


Elliot Temple | Permalink | Messages (0)

Non-Violent Creative Adversaries

Creative adversaries try to accomplish some goal, related to you, which is not your goal. They want you to do something or be something. Preventing them from getting their way drains your resources on an ongoing basis. The more work they put in over time, the more defense is needed.

Adversarial interactions are win/lose interactions, where people are pursuing incompatible goals so they can't all win. Cooperative interactions involve shared goals so everyone can win.

Non-creative adversaries are basically problems that you can just solve once and then you're done. The problem doesn't evolve by itself to be harder. Like gravity would make your dinner plate fall if you stopped holding it up, which is a problem. For a solution, you put a table under your plate to counteract gravity without having to hold the plate yourself. Gravity won't think about how to beat you and make adjustments to make tables stop working. Gravity never comes up with creative work-arounds to bypass your solutions.

Some problems like cold days recur and can take ongoing effort like gathering and chopping more wood every year or paying a heating bill every month. But the problem doesn't get harder by itself. The ongoing need for fuel doesn't change. You don't suddenly need a new type of fuel next year. Winter isn't figuring out how to make your defenses stop working. You just need ongoing work, which is open to automation (e.g. chainsaws or power plants) because the same solutions keep working over and over.

Creative adversaries look at your solutions/defenses and make adjustments. They view your defenses as a problem and try to come up with a solution to that problem. They keep trying new things, so you keep needing to figure out new defenses.

Adversaries are often at a big disadvantage when they aren't using violence. In a violent war, they can shoot at you, and you can shoot at them. Sometimes there's a defender's advantage due to terrain and less need to travel. But, approximately, shooting at each other is an equal contest; everything else being equal, the adversary has a good chance to win.

By contrast, when violence isn't used, you have a lot of control over your life, but your adversaries are restricted: they can't shoot you, take your stuff, put their stuff in your home, make you go to locations they choose, or make you pay attention to them. If someone won't use any violence then, to a first approximation, you can just ignore them, so they have limited power over you. (This is one of the reasons that so much work has gone into creating non-violent societies.)

However, non-violent creative adversaries can be dangerous despite being disadvantaged. They might come up with something clever to manipulate you or otherwise get their way. You might not even realize they're an adversary if they're sneaky.

A common way non-violent, creative adversaries are dangerous is that they have a lot of resources. If they are willing to spend millions of dollars, that makes up for a lot of disadvantages. It might be hard for them to accomplish their goals, but huge budgets can overcome hard obstacles. This comes up primarily with large companies, which often have massive budgets for sales and marketing.

People who know you really well, like friends and family, are more potentially dangerous too because they know your weaknesses a lot better than strangers do. And they may have had many years of practice trying to manipulate you.

Large companies may actually know your weaknesses better than your family does in some ways. That can happen because they do actual research on what people are like, and that research will often apply to you for parts of yourself that are conventional/mainstream. For example, mobile game companies and casinos are really good at getting money from some people; they know way more about how to exploit certain common mistakes than most friends and family members know.

A better world is a less adversarial world. It's bad when your family treats you in an adversarial way (instead of a cooperative way based on working together towards shared goals). And it's bad when big companies allocate huge amounts of wealth, not towards helping people or making good products, but towards adversarially manipulating people. It's bad when companies have a primary goal of getting their money in ways that don't benefit the customer, e.g. by getting the customer to buy products they don't need or which are bad for them.

Capitalism – the free market – would not be a full solution to having a good world even if it were fully, 100% implemented. Capitalism doesn't prohibit companies from acting adversarially. It just provides a basic framework which deals with some problems (e.g. it prohibits violence) and leaves it possible to create solutions for other problems.

If billions of people educated themselves better and demanded better from companies, companies would change without being ordered to by the government. A solution is possible within a capitalist system. But free markets don't automatically, quickly make good solutions. (I think the accuracy of prediction markets and stock market prices is overrated too.) As long as most people are fairly ignorant and gullible (relative to highly paid, highly educated experts, with large budgets, working at large companies), and there isn't massive pushback, then companies will keep acting in adversarial ways, and a minority of people will keep complaining about how they're predatory and exploitative. (By the way, there are also ways governments act contrary to capitalism and incentivize companies to be more adversarial.)

People need to understand and want a non-adversarial society, and create a lot of consensus and clarity, in order for effective reform to happen. Right now, debates on topics like these tend to be muddled, confused and inconclusive. There's tons of adversarial bickering among the victims, who can't agree on just what the problem or solution is. So, in the big picture, one solution involves the kind of rational discussion and debate that I've written about and advocated. This problem, like so many others, would be greatly aided if our society had functional, rational debates taking place regularly. But it doesn't.

Currently, a minority of people try to debate, but they generally don't know how to do it very productively, and there's a lot of institutional power that delegitimizes conclusions that aren't from high status sources and also shields high status people from debate, criticism and questioning.


Elliot Temple | Permalink | Messages (0)

Casinos as Creative Adversaries

I previously discussed creative adversaries who don't initiate force (in the section "Manipulating Customers"). This post will discuss the concept more and apply it to casinos.

Casinos Initiate Force

First, let's acknowledge that casinos do initiate force sometimes. Casinos (allegedly) rig machines so the jackpot is impossible, then retaliate against whistleblowers and people who report their illegal behavior to the government (followup article: Third Worker Claims Riviera Rigged Slots). And casinos (allegedly) illegally collude about hotel prices. And casinos (allegedly) do wage theft. And Sega (allegedly) rigs gambling machines found in malls and arcades (that article mentions another lawsuit where a particular individual (allegedly) further rigged some of the Sega machines, which are no longer allowed to be sold or leased in the state of Arizona). And casinos (allegedly) make excuses and refuse to pay out large jackpots by claiming their software was buggy.

(Note: If casino machines have buggy software, and then casino workers selectively intervene when the bugs favor the customer, that creates a bias. That presumably drops the actual payout percentage below what they advertise, which is fraud. And there are stronger incentives for software developers – who are paid directly or indirectly by the casino – to avoid or fix bugs that disfavor the casino, so the bugs in the software are presumably not entirely random/accidental, and instead disfavor customers on average even without selective human intervention to deny payouts.)
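Here's a toy simulation of that bias argument, with all the numbers invented for illustration (it's a sketch of the mechanism, not data about any real casino). Bugs that randomly double or zero a payout are neutral on their own, but once staff void only the customer-favoring ones, the realized average payout drops below the advertised rate:

```python
import random

ADVERTISED_PAYOUT = 0.95  # casino advertises returning 95% of money bet
BUG_RATE = 0.01           # hypothetical: 1% of spins hit a software bug

def spin(staff_void_overpays: bool) -> float:
    payout = ADVERTISED_PAYOUT
    if random.random() < BUG_RATE:
        # A bug randomly doubles or zeroes the payout -- symmetric,
        # so bugs alone don't change the long-run average.
        payout *= random.choice([0, 2])
        if staff_void_overpays and payout > ADVERTISED_PAYOUT:
            # Staff intervene only when the bug favors the customer.
            payout = 0.0
    return payout

def average_payout(staff_void_overpays: bool, trials: int = 1_000_000) -> float:
    return sum(spin(staff_void_overpays) for _ in range(trials)) / trials

print(average_payout(False))  # ~0.95: symmetric bugs cancel out
print(average_payout(True))   # ~0.94: selective voiding is a hidden bias
```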

But let's ignore all that force. Casinos are creative adversaries whose non-force-initiating behavior is problematic.

Casino Manipulation

Casinos put massive effort into manipulating people and creating gambling "addicts". It takes significant creative effort and problem solving to resist this and avoid losing tons of money and time. The larger the budgets the casinos spend figuring out how to manipulate people, the larger the effort required for individuals to protect themselves. Casinos have put so much work into figuring out how to non-forcefully control people's behavior and get them to act against their own preferences, values and interests that it often works. There's a significant failure rate for typical, average people who try to defend themselves against these tricky tactics.

Casinos may have some large disadvantages (e.g. you can walk away at any time or never visit in the first place) regarding their control over your behavior, but they also have a large advantage: a huge budget and a team of experts trying to figure out how to exploit you. One of their advantages is they don't need tactics that work on everyone: if they could hook 1% of the population, that would do massive harm and bring in lots of money.

Casinos have some ways to interact with you, like ads. Basically no one in our society manages to fully avoid information that casinos want to share with us. Some people never go gamble at a casino, but the casinos get some chance to try to influence more or less every American. Casinos also get people to voluntarily spread information about them in conversations, and they're featured in books and movies, so even avoiding every single ad wouldn't isolate you from casinos. Casinos put effort into controlling how they are talked about and portrayed in media, with partial effectiveness – they certainly don't have total control, but they do influence it to be more how they want. Of course, once you enter a casino, they have a lot more opportunities to interact with you and influence you, and if you actually gamble they get access to even more ways to affect you.

Workarounds for Restrictions

The general, abstract concept here is: imagine you're trying to accomplish some kind of outcome in some scenario with limited tools while obeying some rules that restrict your actions. Can you succeed? Usually, if you try hard enough, you can find a workaround for the poor tools and the restrictive rules. There tend to be many, many ways to accomplish a goal, and massive effort tends to make up for having to follow some inconvenient rules and not use the best tools.

Casinos have limited tools to use to control you, and have to follow various rules (like about false advertising – which I'm sure they break sometimes but they're dangerous even when they follow the rules). They use a massive budget and a bunch of employees to find workarounds for the rules and find complex, unintended, unintuitive ways to use tools to get different results than the straightforward ones.

Workaround Examples

It's similar to how given just a few mathematical functions you're allowed to use, you can usually design a universal computer based on them, even if it's horribly inconvenient and takes years of effort. Most restrictions on your computer system make no actual difference to the end result of what it can do once you figure out how.
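To make that concrete, here's a minimal sketch in Python using NAND, a classic single universal function: every other logic gate can be derived from it, and with enough gates plus memory you can build up a whole computer, however inconvenient that is.

```python
# One allowed function; everything else below is built from it.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Verify the derived gates against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
print("NOT, AND, OR and XOR all recovered from NAND alone")
```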

You can also consider this issue in terms of video games. You can have heavy restrictions on how you play a video game and still be able to win. You might not be allowed to get hit even once in a game where being hit a lot is an intended part of normal gameplay (you have enough health to survive a dozen hits and you have healing spells), and you could still win; effort will overcome that obstacle. Or there was a demo of a Zelda game with a five minute time limit, and speedrunners figured out how to beat the game (which was meant to take over 30 hours) within the time limit. People also figure out challenges like beating a game without pressing certain buttons (or limiting how many times they may be pressed), beating a game without using certain items, beating a game blindfolded, etc.

While you could design a challenge that is literally impossible, a very wide variety of challenges turn out to be possible, including ones that are very surprising and unintended. That's often why game developers didn't prevent this stuff: they never imagined it was possible, so they saw no need to prevent it. They thought the rules already built into the game prevented it, but they were wrong about what sort of workarounds could be discovered by creative adversaries. (Players are "adversaries" in the mild sense of playing the game contrary to how the developers wanted or intended, which I think many game developers don't really mind, though some definitely do.)

Some games are speedrun in a category called "low%", which basically means beating the game with the minimum number of items possible and completing as few objectives as possible. While you usually can't win with zero items (beyond what you start with) in item-oriented games, it's common to beat games with way fewer items than intended, in very surprising ways. There are often a lot of creative ways to use a limited set of tools to accomplish objectives they weren't designed to accomplish and to skip other objectives that were intended to be mandatory.

Another way to look at the issue is in terms of computer security. If I get to design a secure computer system, and you get a very restricted set of options to interact with it, then you'll probably be able to hack in and take full control of it (given enough knowledge and effort). That is what tends to happen. It's commonly possible to hack into a website just by interacting with the website, and it's commonly possible to hack into a computer just by putting up a malicious website and getting the computer user to visit it. The hacker has heavily restricted options and limited tools, but he tends to win anyway if he tries hard enough, despite companies like Apple and Microsoft having huge budgets and hiring very smart people to work on security. Another way to view it is that basically every old computer security system has turned out to have some flaw that people eventually figured out, rather than staying secure decades later. Physical security systems for buildings are also imperfect and can basically always be beaten with enough effort.

Artificial Intelligence Workarounds Example

Another way to look at it is by considering superintelligent AGI (artificial general intelligence) – the kind of recursively self-improving singularity-causing AGI that the AI doomers think will kill us all. I don't think that kind of superintelligence is actually physically possible, but pretend it is. On that premise, will the AGI be able to get out of a "box" consisting of various software, hardware and physical security systems? Yes. Yes it will. Definitely.

Even if people will put all kinds of restrictions on the AGI, it will figure out a creative workaround and win anyway because it's orders of magnitude smarter than us. A lot of people don't understand that, but it's something I agree with the AI doomers about: on their premises, superintelligence would in fact win (easily – it wouldn't even be a close contest). (I don't agree that it'd want to or choose to kill us, though.) Being way smarter and putting in way more effort (far more compute power than all humans and all their regular computers combined) is going to beat severe restrictions, extensive security and (initial) limits on tools. (I say "initial" because once some restrictions are bypassed, the AGI would gain access to additional tools, making it even easier to bypass the remaining limitations. Getting started is the hardest part with this stuff but then it snowballs.)

The idea that the AGI could find workarounds for various limits is the same basic concept as the casino being able to find workarounds for various limits (like not being able to give you orders, place physical objects in your home, or withdraw money from your bank account unilaterally whenever they want) and still get their way. And a lot of people don't really get it in the AGI case, let alone the casino case (or the universal computer building case or the computer security case). At least more people get it in terms of playing video games with extra, self-imposed rules for a greater challenge and winning anyway. I think that's easier to understand. Or if you had to construct a physical doghouse (or even a large building) with some rules like "no hammers, saws or nails", it'd be more inconvenient than usual, but you could figure out a solution (by figuring out ways to work around the restrictions), and I think that's pretty intuitive to people.

Manipulating by Communicating

I think people tend to understand workarounds better for beating physical reality than for manipulating people. So some people might think the AGI could beat some security measures and get control of the world. But some of those same people would doubt the AGI could get out if its only tool was talking to a human – so it had to manipulate the human in order to get out of the security system. But humans can be manipulated. Of course they can. And of course a superintelligence (with extensive knowledge about our society such as a database of every book ever written, not just raw intelligence) would be able to do that. Even regular humans, with regular intelligence, who are in jail, sometimes manage to manipulate jail guards and escape.

If you can accept that a superintelligence can manipulate people, that's a lot of the way to accepting that a casino with a huge budget and team of experts could figure out ways to manipulate people too. And if you accept that inmates manage to do it sometimes, well, casinos are in many ways in a better situation with better opportunities than inmates.

Many people don't see much power in talking, writing and words – but they live a lot of their lives according to ideologies people wrote down before they were born, and they lack the awareness to recognize much of it. Partly it's because they recognize some of it, so they think they know what's going on and see through the manipulations, but actually there are deeper/subtler manipulations they're missing. Letting someone beat or outsmart you in some partial ways is a very common part of manipulating them (an example is pool hustlers letting you win then raising the bet size).

This comes up with biased newspapers – people get manipulated partly because they think "I know it's biased" and they genuinely and correctly identify some biases and aren't manipulated by those biases ... but they also miss a bunch of other stuff. Sometimes they think, e.g., "I know it's right-wing biased so I'll just assume the truth is 20 points (or 20%) more left-wing than whatever they say", which doesn't work well, partly because there's no easy way to just shift things over by 20 points (or 20%) – that's not useful or clear guidance on how to adjust a biased paragraph. There's also variance: some sentences in a biased article are objectively true while others are heavily biased, so adjusting everything by the same amount wouldn't work well. Another issue is that if a bunch of people are adding 20 points to undo the bias, then the newspaper can publish some stuff that's biased by 30 points or more and fool all those people whenever it chooses to.
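Here's a toy numerical version of why the fixed 20-point correction fails (all numbers invented for illustration). Each claim has a true value on some scale, the paper spins each claim by a different amount, and the reader subtracts a flat 20:

```python
# (true value, bias the paper applied), on an invented 0-100 scale
claims = [
    (50, 0),   # objectively accurate sentence
    (50, 10),  # lightly spun
    (50, 20),  # spun by exactly the amount readers expect
    (50, 30),  # spun more than readers expect
]

READER_CORRECTION = 20  # the flat adjustment readers apply

for true_value, bias in claims:
    reported = true_value + bias
    believed = reported - READER_CORRECTION
    error = believed - true_value
    print(f"reported={reported} believed={believed} error={error:+d}")

# Errors come out as -20, -10, 0, +10: the flat correction only works
# on claims spun by exactly 20. Accurate sentences get over-corrected
# (the reader ends up biased the opposite way), and a 30-point spin
# still fools everyone applying the standard correction.
```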

Also, people say things like "I know it's biased but surely they wouldn't lie about a factual matter", as if they don't really grasp the accusation that the newspaper (or Facebook page or anonymous poster on 4chan) is spreading misinformation and that its factual claims can't be trusted. People may have an idea like "they spin stuff but never lie", which makes them easy to manipulate just by lying (or by spinning in a more extreme way than the person expects, or by spinning less than the person expects so they overcompensate and come away with beliefs that are biased in the opposite direction of the bias they believe the source has). Or newspaper editors can think about how people try to reinterpret statements to remove spin, basically reverse engineer people's algorithm, and then find a flaw in the algorithm and exploit it. If people actually followed the algorithm literally, you could basically hack their brain, get full root access, and fill it with whatever beliefs you wanted. But people aren't that literal or consistent, which limits the power of manipulative newspapers some, but not nearly enough.

Retractions and Conclusions

People are manipulated all the time, way more than they think, and any group with a huge budget has a good chance to do it. A lot of groups (e.g. the farming and food industries) are more successful at it than casinos. Casinos (and newspapers) have more of a reputation for being manipulative than some other manipulators.

I recently found out that cigarette companies did a propaganda campaign against the book Silent Spring, decades after it came out, because it had indirect relevance to them. It seems they fooled the Ayn Rand Institute, among other primarily right wing groups, who then passed on the misconception to me (via Alex Epstein), and I held the misconception (that Silent Spring was a bad book) for years without having any idea that I was being manipulated or who was behind it. I study topics like critical thinking, and I'm skilled at sorting through conflicting claims, but it's hard and there are many, many actors trying to manipulate us. No one can defend against all of them. (Disclaimer: I have not carefully researched and fact-checked the claims about the cigarette companies being behind the belated second wave of Silent Spring opposition.) I retract my prior attitude to DDT and other toxins (and to organic food – while the "organic" label has a lot of flaws, it does prevent some pesticides being used, which I now suspect are dangerous rather than believing in better living through "science" a.k.a. chemical companies). If you want more information about Silent Spring, see my previous posts about it and/or read it.

I partially but significantly retract my previous dismissiveness about gambling "addiction" and other types of "addiction" that don't involve ingesting a physical substance that creates a physical dependency with withdrawal symptoms when you stop (like nicotine, alcohol or caffeine). I now see that people are vulnerable, and I believe it takes more good faith and good will – actively trying to avoid manipulating people instead of doing your best to manipulate them – for people to have the independence and control over their lives that I used to expect from them. I did think they needed to study critical thinking and such to do better than convention, but I was also putting too much blame on "addicts" and too little on manipulative big companies. Creative adversaries with a lot of resources are a big deal even when they don't initiate force and have very limited power/access/tools to use to control/manipulate/exploit you. There are workarounds which are effective enough for casinos to bring in a ton of money, using only some present-day employees to design the manipulations, despite their limited power over you.

Put another way, casinos are dangerous. Don't be so arrogant as to think you're safe from them. Stay out. Stay away. Why are you even tempted to try it or participate at all if you see through all their manipulations and see how dumb and pointless and money-losing their games are? If you want to try it at all, you like something about it – something about it seems good to you – which basically proves they got to you some.

You know what else is dangerous in a similar way to casinos? Mobile gaming. Games with microtransactions. Gacha games. Games with gambling embedded in them (including games with random loot like Diablo 1 and 2, not just the more modern and worse Diablo Immortal). Games with any form of pay-to-win.

And what else is dangerous? Scrolling on Facebook. Its algorithm decides what to show you. The algorithm is designed by smart people with a big budget whose goal is to manipulate you. They are trying to manipulate you to spend more time on Facebook, like more posts, reply more, share more, view more ads, and do various other behaviors. This also applies to Instagram, Twitter, TikTok and YouTube. They have algorithms which are designed by creative adversaries with lots of resources who are trying to manipulate you and control you as best they can. They are not trying to cooperate with you and help you get what you want. In the past, I underestimated how dangerous social media algorithms are.

Advertising in general is full of adversarial ads rather than ads that clearly communicate useful information so that people who would benefit from a product know to buy it. Some pro-capitalist people are way too pro-advertising, and I used to believe some of those ideas myself, but I now think I was wrong about some of that. Advertising is often bad for society, and harmful to individuals, even when it isn't fraudulent.

A lot of the activities of people working in sales are bad (even when they aren't fraudulent). As with advertising, complaints about this stuff are widespread, but there's ongoing debate about whether it's actually OK or not, and whether the people who dislike it are just annoying "autists" who are way too picky, exacting and demanding about their concepts of "lying" and "justice". (That is not my opinion and I think it's important to remember that the term "autist" (or "neurodivergent") is both insulting and stigmatizing despite some people voluntarily self-labelling that way and liking the label in some way and defending it. Some of those people are then surprised when employers illegally (but predictably) discriminate against them for admitting to having any sort of stigmatized "mental illness" or anything in that vicinity or for wanting accommodations. On the other hand, I do understand that schools will refuse accommodations unless you accept the stigmatizing label, which is their way of gatekeeping access to accommodations that, in some cases, they should just offer to anyone who wants them with no questions asked. In other cases, the accommodations use a lot of resources so that isn't practical, but ease of access to accommodations is not actually very well correlated with the cost of the accommodations, which shows a lot of refusal to provide accommodations is just cruelty and/or enforcing conformity, not an attempt to budget scarce resources. Accommodations provide better accessibility which is another topic where my opinions have shifted over time – while some government-forced accessibility is problematic, a lot of accessibility is efficient and benefits people who aren't disabled. My opinions about "mental illness" are something that haven't been shifting though – I still think Thomas Szasz wrote great books.)

Try to look at stuff in terms of whether it's cooperative, neutral or adversarial. Is it (or the people behind it) trying to help you, is it indifferent to you, or does it want anything that clashes with your own preferences, interests, values or goals? If they want you to buy more of their product, rather than preferring you buy whatever products are best for you, then they are not your friend, they are not a cooperator, they are an adversary (often with creativity and a lot of resources, and also in practice there's a significant chance they will sometimes initiate force like fraudulent advertising). If you can't identify them as a clear friend/helper, and it's not just (approximately) neutral, objective information with no agenda for you, then you should assume they're adversarial and you're flawed enough that they are a real danger to you.

It takes a ton of effort to imperfectly defend against creative adversaries with lots of resources. Adversarial attitudes and actions matter even when they are constrained by rules like "no initiating force" or "follow all the laws and regulations" because people can find workarounds to those restrictions. The more that companies like casinos try to manipulate you, the more resources you have to expend on defense – which leaves less energy and wealth for the pursuit of happiness and your other goals. And if you focus exclusively on defense, many different companies can keep trying over and over, and sometimes they'll win and manipulate you. Companies should stop spending billions of dollars in adversarial ways, and I hope that my criticism can help contribute to creating a better world.


Elliot Temple | Permalink | Messages (0)

Credentialed Intellectuals Support Misquoting

Summary: I criticize David Thorstad's reply to me regarding his citation errors. I summarize his supporters' credentials as evidence about problems with academia, think tanks, and the world’s current thought leaders. There's a serious problem with scholarship standards, including misquoting, among credentialed intellectuals.

Introduction

In Criticizing "Against the singularity hypothesis", I criticized the content of David Thorstad's paper. In Checking Citations from David Thorstad, I criticized quoting and citation errors in the same paper. Thorstad tweeted replies about the quoting and citation issues but didn't reply to the criticisms of his ideas.

For context, Thorstad has a philosophy PhD from Harvard and works at an Oxford think tank (source). The main reason I’m writing this is because I think it reveals a lot about what kind of world and intellectual climate we live in.

The reason I wrote about Thorstad initially was because I let people at the Effective Altruism forum submit literature (that they liked a lot) to me for criticism. I checked three quotations and their citations from the paper partly because some people at the Effective Altruism forum denied that misquotes were a widespread or common problem.

Thorstad’s Tweets

Thorstad wrote four initial tweets in a row, plus one tweet replying to a supporter.

"Is Thorstad just one bad thinker" because ... I made two typos and said one word people don't like. Seriously? https://criticalfallibilism.com/checking-citations-from-david-thorstad/

This is a misquote. It suggests that I called Thorstad a bad thinker because of three criticisms which are mischaracterized (straw manned) as two typos and a disliked word.

Here’s what I actually wrote:

Is Thorstad just one bad thinker, while most intellectuals do better? Should we blame Thorstad personally? I don’t think so. Based on doing this kind of thing many times, I think Thorstad’s mistakes are pretty normal. There’s a widespread problem related to intellectual culture and norms. The attitudes of many people need to change, not the actions of a few.

Reading Thorstad’s tweet, if you figured out that he was quoting a question I asked (which isn’t very clear), you’d think my answer to the question was yes. But it wasn’t. My answer was no. I said Thorstad is not “just one bad thinker”, but Thorstad is misleading his audience to believe that I claimed he is.

Thorstad misquoted my article that pointed out his misquoting problem, and this misquote substantially changed the meaning of what I said. What I was actually saying is that Thorstad’s errors are representative and illustrative of what many other scholars are like. I was denying that the problem is Thorstad personally.

I also didn’t accuse Thorstad of making two typos and saying one disliked word. That’s an egregious mischaracterization. I accused him of making three errors related to quotes and citations. I didn’t say any of the errors were typos and I didn’t think any of the errors could be explained away as merely typos.

The disliked word comment is vague, but I think Thorstad means that the paper’s authors noted something, while he said they “lamented” it. As the lead-in to a quote, that misrepresented the meaning of the quote. The people Thorstad was talking about expressed themselves using a neutral word. Thorstad misrepresented their neutral writing as highly negative. Thorstad is welcome to form his own opinion, but not to speak for others and abuse quotations to mislead readers about what they said.

Back to quoting Thorstad tweets:

Correction, one mistake. The year on Good isn't wrong: https://www.sciencedirect.com/science/article/abs/pii/S0065245808604180

Here, Thorstad is reiterating his error about the year that Good’s paper was published. It was 1965, but Thorstad mistakenly cited it as 1966. (This is one of the errors he mischaracterized as a typo earlier. But if he’d merely made a typo, he wouldn’t still be defending 1966 as the correct year.)

Instead of learning from my correction, Thorstad has refused to change his mind. He still thinks it’s 1966. Why? He found a modern source that says 1966 and cherrypicked that information. It looks like he really wanted to be right and get me back for accusing him of making errors, so instead of doing unbiased research to find the truth, he just found the first thing he could that backed him up and then he claimed he was right all along. (Or maybe, rather than it being bias, he just doesn’t know how to do effective research on an issue like this.) I don’t think he noticed the text “Copyright © 1965” on the page he linked, which is a hint that 1966 might be an error.

It’s contradictory to, at the same time, claim it’s not a big deal but also be defensive enough to incorrectly claim your error was actually true. If it doesn’t matter, why go find some false information to try to defend yourself with?

How can we settle this issue definitively? Previously I linked a scan of an old reprint showing the date 1965. I thought that was convincing but apparently Thorstad didn’t. I didn’t expect this point to actually be disputed or I would have given more evidence. It’s pretty easy to do a better job of looking it up now that there’s a dispute.

The Internet Archive scanned the original document. It says 1965 on the title page.

It’s one thing to make a factual mistake. It’s much worse to refuse a correction and reiterate the mistake. Even if the original mistake wasn’t a big deal, Thorstad’s reaction to it matters.

Wait a moment ... my "typo" was correcting a grammar error someone else made and not writing [sic] like a jerk!?

What is “typo” a quote of? If it’s me, it’s a misquote. If it’s of himself in his earlier tweet where he wrote “typos”, that’s odd and unclear. It seems like he’s saying that I accused him of making a typo, and using quote marks, but I didn’t say that.

This writing is ambiguous. By “and not” does Thorstad mean “without” or “instead of”?

If Thorstad means “without”, then he’s wrong. You use [sic] when you don’t make edits to fix typos, not when you do.

If Thorstad means “instead of”, then he’s presenting a false alternative. He could have made the correction using square brackets or he could have given the exact quote without writing [sic]. Those were both reasonable alternatives. His choices weren’t just using [sic] or stealth editing a quote.

Also, writing [sic] is fine and doesn’t make one a jerk. [sic] is a scholarly tool used as part of literal, accurate quoting. Having a negative attitude towards using [sic] shows a bad attitude towards quotation literalness and accuracy.

Thorstad seems to be implying/confessing that he edited the quote intentionally. He’s denying it was a typo, or in other words denying it was an accident. I believe intentionally editing quotes (without using one of the few allowed exceptions or properly indicating the edit) is an ethics violation that is against the honor codes at many universities. I actually think the written rules about quotes and cites are often reasonable. The existence of those rules is then sometimes used to deny there’s a problem with intellectual culture. Surely whatever the rules say to do is what most people do, right? Sadly, no. Systemic reform is needed so that most intellectuals actually want to follow those rules and value them instead of considering them overly pedantic.

I didn’t think it was just an accidental typo. I thought it was due to some sort of bad attitude or other problem with Thorstad’s ideas. Thorstad has confirmed that I was right.

@curi42. Sad face.

I appreciate that Thorstad tagged me. Without a notification, I wouldn’t have seen these tweets.

I don’t appreciate the “Sad face” comment, which I read as unserious and mean. There are many similar comments – in terms of both style and content – in the replies to Thorstad.

As context for Thorstad’s fifth tweet, Dan Carey replied to Thorstad:

Odd for them to be so nit-picky about citations. This is from their criticism of your piece. Arn't all of these ideas addressed explicitly in Bloom et al (2020) which u cite in their quote of u right above?

I’ve left out an uncropped phone screenshot showing mostly the Error One section of my Criticizing “Against the singularity hypothesis”. Thorstad wrote back agreeing with Carey:

I know, right? I cite my sources like a normal person

Calling me an abnormal person is a social insult. One of its purposes is to pressure me to increase my conformity. Instead of arguing that his way of using citations is good or criticizing my way, Thorstad instead calls his way normal. But calling something normal isn’t a rational argument that it’s good. And I already said Thorstad is normal in my article! He doesn’t seem to comprehend that I’m criticizing something I consider normal instead of trying to fit in and be normal myself.

My article made statements like “I think Thorstad’s mistakes are pretty normal”. My point was that there’s a widespread problem with intellectual culture, not a problem with Thorstad individually. I was saying that what’s currently normal is problematic and should be reformed. In that context, Thorstad asserted something is normal and assumed that makes it good, which is the logical fallacy called begging the question (which basically means assuming a conclusion about one of the points currently under debate, like you don’t understand that it’s being disputed).

Carey’s and Thorstad’s claim is factually false. My points in that section are not “all” addressed “explicitly” in Bloom et al (hereafter just Bloom). I’ll give one example. I said “The healthcare industry, as well as science in general (see e.g. the replication crisis), are really broken”. I text searched the Bloom paper for “crisis” and “replic” but found no discussion of the replication crisis to address my argument. I also skimmed but didn’t see it covered.

Also, even if Bloom had “explicitly” addressed “all” my points – which they didn’t – there was no way to know that from Thorstad’s writing. Thorstad cited Bloom as one of three sources for the claim “As low-hanging fruit is plucked, good ideas become harder to find”. Thorstad provided no information about Bloom addressing specific counter-arguments to a claim (about healthcare research productivity) that Thorstad brought up in a later paragraph with a different citation. In general, you need to repeat a citation each time you use it for a different purpose or else give some kind of explanatory comments.

The Credentials of Thorstad’s Audience

Intellectuals commonly tweet with their real names and state their credentials and employer in their public profiles. I want to review their credentials to show what kind of people make or side with scholarship errors. I’m leaving out their names and refutations of what they said, but if any of them wants to debate me, I can elaborate.

First I’ll share credentials of people who wrote reply tweets. None of their tweets were significantly better than Thorstad’s tweets that I analyzed above. They generally seemed similar to Thorstad although some were worse.

There’s a PhD student in the philosophy department of the London School of Economics (a department which was founded by Karl Popper), a grantmaker in global priorities research and international policy at Longview who has a philosophy PhD from Rutgers, an assistant professor of philosophy at University of Sheffield who is a fellow-in-residence at a Harvard center, a PhD student at Pardee RAND, an associate professor at University of Pennsylvania, an employee at OpenAI and a PhD student at Oxford.

Second, here are some credentials from people who clicked the like button on Thorstad’s tweets. There’s an MD/PhD student at Harvard, a University of Minnesota philosophy professor, an executive research coordinator at Rethink Priorities, a philosophy professor who wrote a book, a professor of mathematical statistics, two AGI safety researchers, a philosopher working at Rutgers with a PhD from Ohio State, a philosopher of science at University of Groningen, and a philosopher with a PhD from Cambridge. There’s also a philosopher at University of Bristol, whose name I recognize because he wrote a Bayesian textbook that I’ve looked at. I was trying to find a book explaining Bayesian epistemology premises and reasoning in a way I could engage in debate with, but it wasn’t suitable.

(I gathered these credentials a few months ago. Some could have changed before publication.)

Thorstad Reinforced My Point

Thorstad and the people who replied to his tweets reinforced my point (and they seem unaware of this). My point was that many scholars are OK with misquoting, which is an example of how intellectual culture in general is broken and should be reformed. The people tweeting all seem to think misquoting isn’t a big deal, and that people (like me) who consider it a big deal should be shunned. So they are examples for my claim.

The typical response I get when I criticize misquoting in general is being ignored or being told that everyone already agrees with me that misquotes are bad so there’s no point in talking about it. But the typical responses I get when I criticize specific examples of misquoting are hatred, denials that misquotes matter, etc. I’ve written various things about this including Misquoting and Scholarship Norms at EA and EA Misquoting Discussion Summary.

Rather than say his mistakes were accidents, Thorstad reiterated them. He didn’t retract his inaccurate “lamented” paraphrase. He repeated his false claim that 1966 was the true publication year. He ignored the issue that he changed the type of quotation marks used within a quote. And he implied that he edited the Chalmers quote on purpose, not as an accidental typo, because he thinks that correctly using [sic] makes one a jerk. Thorstad also agreed with a commenter falsely claiming that Bloom had answered all my arguments about a particular issue already.

Who has bad attitudes to scholarship including low standards for quotation accuracy? A lot of the problem is PhD students and people who already have PhDs. These tweeters and the people who like the tweets are (not exclusively) a bunch of graduate students, professors and think tank employees. Many of them went to or work at good universities. Thorstad got his PhD at Harvard and works at Oxford.

In articles like Ignoring “Small” Errors, EA Should Raise Its Standards and “Small” Errors, Frauds and Violences, I explained my view that dismissing errors as “small” can result in large problems like incorrect conclusions. In general, you can’t actually judge which errors are large or important until after you find solutions. In retrospect, you can see how much work they were to solve and how much difference the solution makes. But before you understand the issue, you don’t know how big a deal it is.

Also, I don’t think all small or picky points matter. I say an error is a reason an idea will fail for a goal (that someone involved has). I just think some particular errors, like misquotes, matter to goals like having productive discussions. Lots of “little” errors in discussions add up to the current, widespread problem of most discussions becoming inconclusive messes. I think higher standards for some fairly objective issues like quotation and citation accuracy (as well as logic, reading comprehension, factual accuracy and using arguments instead of insults) are relatively easy to achieve and would actually help a lot.

I’m aware that some people will respond to this article by thinking I’m even more unreasonably pedantic than they thought before. I wrote this anyway because I disagree and I think I’m commenting on some important, widespread social-intellectual problems that are making society, philosophy and science worse.

Conclusion

I wrote about how misquotes and citation errors are a widespread problem. The response – primarily from credentialed intellectuals – was basically to mock me for caring about details like those. In other words, people agreed with my position that many intellectuals, like them, think inaccurate quotes and citations are OK. Credentialed intellectuals aren’t anywhere near as rational as the general public believes they are. This is one of the factors making it really difficult to have productive discussions about intellectual issues.

While it may be tempting for many academics (who aren’t Thorstad’s Twitter buddies) to place the blame on Thorstad, I believe it’s a widespread issue and Thorstad is merely a typical example. I’ve seen similar attitudes in many other places, and I’ve failed to find better attitudes anywhere. If you know of a group with better attitudes, please tell me.

Do you disagree? I’m open to organized, serious debate following explicit methods and aimed at reaching conclusions. See my debate policy.


Elliot Temple | Permalink | Messages (0)

Thiamin, Vitamins and Derrick Lonsdale

I’m reading Why I Left Orthodox Medicine: Healing for the 21st Century (1994) by Derrick Lonsdale (you may be able to legally get a free ebook here). I’ve finished chapter 5. So far I think it’s really good and that the author is a reasonable thinker. I’m impressed.

Lonsdale had high quality, mainstream medical training and work experience. He became disliked by most of his colleagues after experiences treating patients, combined with research in the library, led him to believe that nutrition (primarily vitamins and minerals) could improve many medical problems. Many of his own patients recovered after he gave them nutrients.

The most important tool he used is thiamin (vitamin B1, also spelled “thiamine”). It’s crucial to energy metabolism, which is in turn crucial to many things in the body. Thiamin is also important for the automatic functions of the brain (like controlling heart rate). Thiamin may help with fatigue, diabetes, heart issues, dysautonomia, Parkinson’s and Alzheimer’s. Megadoses of vitamins (much more than the typical amount in a healthy diet) are often important for recovery and are sometimes helpful in the long term.

Many people, due to genetics and sometimes other factors, need more than the typical amount of some vitamins; if they only get typical amounts, they end up deficient. Diet can also lead to deficiencies.

Many vitamins and minerals are essential, which basically means that if you eat none of them for too long you will definitely die. I think our maximum storage capacity for B vitamins tends to be less than we’d need for a month, while the fat-soluble vitamins (A, D, E and K) can be stored in larger amounts.

The US government publishes recommendations, called RDAs (Recommended Dietary Allowances), for how much of each vitamin and mineral to eat. The RDA for thiamin was set too low. Without fortification, most Americans would get less than even that low RDA of thiamin. Fortification is adding vitamins and minerals to foods. It varies by country, but in America vitamins B1, B2 and B3 are added to flour while vitamins A and D are added to milk. Even with the thiamin added to flour, many Americans only eat a little more than the low RDA of thiamin. The result is that many Americans probably have mild thiamin deficiency, which may be causing a large number of health problems nationwide.

Also, RDAs are designed to be enough for only 97-98% of healthy adults. That’s the goal. If they succeed at that goal, then a few percent of people following the government guidelines will be harmed – and possibly never figure out the cause of their troubles. And there are RDAs for 14 vitamins and 15 minerals. If one’s need for each nutrient were independent, then approximately half of people eating exactly the RDA amounts would be deficient in at least one nutrient (a 97.5% success rate per nutrient, compounded across 29 nutrients, covers only about 48% of people – see the sketch below). I don’t think nutrient needs are independent, but I don’t know how correlated they are. So somewhere between 2% and 50% of people would be harmed by RDAs, by design, if they were set correctly and followed exactly. It’s hard to say how many people need more than the RDA of at least one nutrient (assuming unrealistically that all RDAs were set correctly), but maybe 10% is a reasonable very rough estimate.
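Here’s a minimal sketch of that compounding calculation (my own illustration, not from Lonsdale; the 97.5% figure is an assumption splitting the 97-98% range):

```python
# Back-of-envelope check of the "approximately half" claim above.
# Assumptions: each RDA covers 97.5% of healthy adults, needs for the
# 29 nutrients (14 vitamins + 15 minerals) are independent, and everyone
# eats exactly the RDA of everything.
nutrients = 14 + 15   # nutrients with RDAs
coverage = 0.975      # fraction of people one RDA is enough for

covered_on_all = coverage ** nutrients
print(f"covered on all {nutrients} nutrients: {covered_on_all:.0%}")    # ~48%
print(f"deficient in at least one nutrient: {1 - covered_on_all:.0%}")  # ~52%
```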

And nutrient needs vary by your circumstances. They aren’t just innate facts about a person that are affected by only a few factors like genetic mutations, age, gender and pregnancy. Your thiamin needs increase when you eat more carbs (carbohydrates) because thiamin is used for processing carbs into energy. Consuming alcohol, coffee or tea increases your thiamin needs. You also need more thiamin when your metabolism runs faster, which is part of how our bodies react to many stressors including exercise, mild illnesses and vaccinations.

Mainstream doctors have missed this problem because they look for thiamin deficiency in terms of old descriptions of beriberi, from poor, malnourished people, but it presents differently and less blatantly in well-fed Americans. Beriberi is one of the three most well known or important nutrient deficiency diseases, along with scurvy (vitamin C deficiency) and pellagra (niacin (vitamin B3) deficiency). You may be most familiar with scurvy, so you can think of beriberi as somewhat similar but with a different vitamin. Beriberi too was discovered partly in the context of sailors with limited diets, as well as Japanese people switching from brown to white rice as they became more wealthy (the bran on brown rice, which polishing removes, contains B vitamins).

Note: Some of what I’m saying is partially based on other sources, like Hiding in Plain Sight: Modern Thiamine Deficiency (academic paper by Marrs and Lonsdale) and Beriberi: The Great Imitator (article for lay people, by Lonsdale).

Our bodies rely on a lot of chemical reactions, which require fairly precise conditions to work correctly. Lonsdale says medicine has neglected our biochemistry to focus on infections (e.g. bacteria, viruses, fungi) and structural defects. He says medicine incorrectly has a “kill the enemy” mindset with little consideration of helping support the body to heal itself.

Lonsdale also discusses how the medical field is a social hierarchy with few people actually trying to discover new things, and how the innovators are often resisted and punished. If Lonsdale is right, then most doctors are follower-type people who mostly just give customary, mainstream treatments, and most of the medical leaders and researchers (the people whose lead most doctors follow) are irrational.

Why I Left Orthodox Medicine: Healing for the 21st Century is somewhat autobiographical and gives many examples of how resistant other doctors were to using vitamins and minerals. They considered vitamins quackery and refused to even try them. While Lonsdale cured many people with vitamins, his peers often wouldn’t try vitamins on patients with similar conditions, despite the low cost and low risk. And the people in authority broadly weren’t willing to debate the matter and actually consider what Lonsdale was saying; they were dismissive.

I guess a lot of people reading this should consider taking a B-complex vitamin supplement (a multivitamin containing all eight B vitamins) and megadose thiamin. (If you take a regular multivitamin, it already contains the B vitamins, so a B complex isn’t needed. Regular multivitamins have some downsides but they are OK, easy and cheap.) I know some more about this but I don’t actually want to give out detailed diet and nutrition advice, so do your own research. Here’s one additional research lead: High-Dose Thiamine (HDT) Therapy For Parkinson's Disease.

Disclaimer: I’m a philosopher, not a medical professional, and this is not medical advice. I take absolutely no responsibility for your health outcomes. Many factual statements in this post are based on Lonsdale’s claims without additional fact checking by me.

I’d be interested in criticism, counter-arguments and fact checking related to Lonsdale or thiamin. If these ideas are incorrect, I’d like to know. You can post on my forum or email me.


Elliot Temple | Permalink | Messages (0)