Well-Kept Gardens Die By Pacifism by Eliezer Yudkowsky:
> Good online communities die primarily by refusing to defend themselves.
>
> Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
And what if you’re wrong about who the fools are?
What if you’re biased?
Where are the error correction mechanisms? Where are the objective tests to check that you aren’t screwing up?
Where’s the transparency? The rule of law, not rule of man? Where are the checks and balances?
If all you’ve got (as the moderator) is “I trust my judgment” then you’re just like everyone else, including the fool.
If you add some methods to try to have good judgment and try not to be biased … that’s not enough. Trying really hard, but ultimately trusting yourself and just going by what makes sense to you … is not a good enough defense against bias. Bias is powerful enough to beat that.
If fundamentally you’re just assuming your biases are pretty small and manageable, you are a fool. We are all alike in our infinite ignorance. We’re at the beginning of infinity. We’re so lost and confused in so many ways. We have big errors. Really big errors. Some are society-wide. And when we suppress everything that seems really quite wrong to us, what we’re doing is suppressing outliers. Yeah, negative outliers are more common than positive outliers. It’s easier to be wrong than right. But how do you avoid suppressing positive outliers?
There are mechanisms you can put in place to make it harder to act on whim, bias, irrationality, etc.
E.g. you can explain why you take moderator actions. And you can field questions about your actions and reply to criticism. You could reply to all such things, plus followups. If you won’t do that, that’s a cutoff where you’re blocking error correction. And what have you done to prevent this cutoff from protecting the biases you may have?
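To sketch what that could look like in practice (this is my own illustrative code, not anything from the post; the record fields and helper name are invented): a forum could keep a public log where every moderator action carries its written reason plus a list of unanswered criticism, so anyone can see exactly where error correction is currently cut off.

```python
from dataclasses import dataclass, field

@dataclass
class ModAction:
    """A public record of one moderator action and its justification."""
    action: str                 # what was done, e.g. "topic-limited user X"
    reason: str                 # the explanation published with the action
    open_criticism: list[str] = field(default_factory=list)  # unanswered objections

def correction_cutoffs(log: list[ModAction]) -> list[ModAction]:
    """Actions with unanswered criticism -- each one is a place where
    the moderator is currently blocking error correction."""
    return [entry for entry in log if entry.open_criticism]
```

The point isn’t the code itself; it’s the structure: reasons and criticism become data anyone can inspect, instead of judgment calls that vanish after the fact.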
Yes defense mechanisms are needed. But why can’t they be efficient and reusable ways to address the arguments of everyone, including fools? And a resilient forum where people think for themselves about what to focus attention on. If people want curation, fine, no problem, post curated selections somewhere – and leave the open forum also in existence. Post weekly favorites or whatever for the readers who don’t want to find the good stuff themselves. The curated view on the forum doesn’t have to be the only view. You can have an open public space, and a walled garden, both. Or dozens of competing walled gardens with different curators (though only a few would probably have the vast majority of the popularity). But that’s dangerous. The curators may be biased. They may curate mostly by social status, for example. They may not know they do that. They may not understand some of their own systematic biases.
You have to fundamentally stop assuming you’re right or probably right and take seriously that you need error correction mechanisms to keep yourself honest. You can’t trust your own integrity. Don’t bet your life or your forum on your own integrity.
Scientists don’t bet science on their own integrity. Integrity helps. Try to have it. But science isn’t like “ok well do your best with the experiment and if you have integrity it should work out ok”. Instead experiments are designed to work out ok even if the experimenters can’t be trusted. The mechanisms don’t have unlimited resilience. Egregious scientific fraud has to get caught by outsiders; the scientific method just makes it easier to catch. It’s harder to fake your experiments when there are procedures to follow, documentation to provide, etc. Most people who cheat do it without full conscious intention, though, and that’s easier to deal with. Having a little bit of integrity is really helpful compared to none. And anti-bias mechanisms with transparency and stuff do put a leash on the bad faith cheaters.
Double blind makes it harder to cheat even if you want to. You can be really biased, and your bias can control you, but if you follow the rules of double blind then it’s much harder for you to bias the results.
Control groups don’t care if you’re biased. That’s a system which is hard to cheat without extreme blatantness like skipping the control group entirely and using fabricated data. And it’s hard to do that because of transparency. And even if you get away with it, your results won’t replicate.
You’re expected to write about sources of error. If you don’t, people will write them for you and be more wary of your claims. If you do, people will consider how severe the issues are. If you write them but try to bias them, you’ll have a harder time convincing people who don’t share your biases. And when you leave stuff out, people can notice and then it’s harder for you to answer critics by claiming you totally knew about that and took it into account already.
Even when everyone shares biases, methods like “make a hypotheses. write it down. plan an experiment to test it. write down what results will agree with or disagree with the hypotheses. test it. compare the results to the predictions.” are capable of correcting everyone. That kind of method makes it way harder to fool yourself. If you skip steps like writing things down as you go along, then it’s much easier to fool yourself.
Forums need the same thing. Write down rules in advance. Make the rules totally predictable so people can know in advance what violates them or not. Don’t rely on potentially-biased judgment.
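A minimal sketch of what “totally predictable” rules could mean (the specific rules and names here are illustrative assumptions, not from the post): if each rule is a written, objective predicate over measurable properties of a post, then anyone can run the same check and get the same answer the moderator would, with no judgment call involved.

```python
# Each rule is a (name, predicate) pair. Predicates use only objective,
# measurable properties of a post, so the outcome is the same no matter
# who runs the check.
RULES = [
    ("under 10,000 characters", lambda post: len(post) <= 10_000),
    ("at most 2 links", lambda post: post.count("http") <= 2),
]

def violations(post: str) -> list[str]:
    """Names of every written rule this post breaks; empty means allowed."""
    return [name for name, allowed in RULES if not allowed(post)]
```

Anything that can’t be written this objectively is exactly where judgment creeps back in, and where the extra transparency measures matter most.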
And when you must use judgment at a forum, be like a damn scientist, publish results, talk about sources of error, answer your critics, answer all the questions and doubts, etc. Take that seriously. Discuss the matter to conclusion. At least don’t be and stay wrong if anyone in your community knows you’re wrong and will explain it.
If you can’t resolve discussions, fix that, and consider that maybe you’re a bit of a fool. If you don’t know how to reach conclusions with your critics, or how to manage those discussions without just ignoring arguments, you need better systems and shouldn’t be so high and mighty as to proclaim who is a fool.
Ideally, let the fool defend himself, too. Don’t just let others defend him. Topic-limit him during that discussion if you must, so he can’t participate in other discussions until it’s resolved.
Also in the article, EY says academia is a walled garden that keeps the fools out, so that’s why people are naive and don’t realize they need censors. And I’m like: Yeah, that is exactly what academia is and it’s fucking awful there. And academia’s walls are 99% based on social status.
What is your forum doing to prevent social status bias from deciding who the fools are? What explicit policies does it have that could actually work in case you, the moderators, are biased by social status?
EY’s answer is basically “if the mods suck, the forum is fucked”. Just find other, better people to rule. What an awful answer. Seriously that’s his position:
> Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.
No! You need good systems, not sinless, unbiased moderators (nor mediocre moderators who aren’t all that bad and you just put up with their errors). It’s like: dear God we’re not going to get unbiased politicians; we need a government system that works anyway. Forums are the same thing. Write out laws in advance. Make new laws with a process. Anyone who doesn’t violate a law in a crystal clear way gets away with it. etc. Otherwise the moderators will have some sort of biases – doesn’t everyone? – and they’re going to oppress the people with other biases who are such a source of intellectual diversity. Broaden your horizons instead of getting rid of all your dissidents and misfits and outliers. God damn you, “overcoming bias” you say?
We know a lot about how to deal with bias, abuse, unfairness, etc. from our legal system and from science. Yet people don’t apply those lessons when they have power over a forum.
> I have seen rationalist communities die because they trusted their moderators too little.
>
> …
>
> Here—you must trust yourselves.
You don’t overcome your biases by trusting yourself.
Don’t be a skeptic. Don’t refuse to ever reach any conclusions or judgments. Don’t be scared to act in your life. But don’t trust your moderation to be unbiased. Have layers and layers of safety valves that provide different opportunities for you to be corrected if you’re wrong or biased. Never trust yourself so much that you don’t think you need any mechanisms to keep you honest.
The scientific method makes it harder to be wrong and stay wrong. We need stuff like that for running forums too.
The scientific method does not consider it adequate for scientists to read about how to be good scientists, think about it, discuss it, do their best, and trust themselves. That would be ridiculous.
And the laws have lots of protection against the errors and biases of the individuals who enforce them. E.g. police, judge, jury and executioner are all different people. At forums they’re usually all the same people, which means you need even more safety mechanisms of other types. And we write out clear, objective laws in advance – no retroactive crimes. And there are appeals. And defendants have rights and privileges that the judges have to respect, and that really is enforced in various ways. And we try to limit the discretion of judges to be biased or play favorites by making the law really clear about what they should do. We don’t do that perfectly but we try and it helps. And when we make laws, they are (or at least should be) pretty generic and based on general principles instead of targeting specific individuals or small groups – we want our laws to be relevant for future generations and different societal situations, rather than overly specific to deal with some current events.
Messages (20)
I didn't post this on Less Wrong. I don't think they'll like it even if I edit out the profanity and make the negativity towards EY more polite and limited.
I wrote this on 2020-08-16.
> And academia’s walls are 99% based on social status.
Based on what? What is your evidence/argument for this?
If this is true, the implication would be that the West is no more worthy of admiration than a celebrity reality show. With Trump being the president, I am sympathetic to this view, but I have not given up on the West just yet.
#17531 Do you have an alternative claim? Where's your evidence for it?
I didn't make a claim. The burden is not on me.
However, I would say the counter evidence is self evident. Unless you're the kind of mouthbreather with a rose on your bio who thinks the West is worse off than other places. In that case I have no time for you.
#17540 You're trolling. Please leave.
Interpretation of the above as tree diagram
https://www.is-this-normal.net/public/Forum%20Moderation%20Needs%20Anti-Bias%20Policies.png
#17547 Cool. But would you repost or update the tree with an extra node with author, date and a link to the blog post it's about? That'll make it work as a standalone file that someone could look at in isolation. Then I can include it in my example tree collection (if you don't mind).
#17542 For context: He was also posting in other threads at the same time, calling people fools, virgins and more.
#17549 And I don't mind if he comes back later (after at least a few days) if he stops trolling.
All good.
I tried to make the updates you requested. Let me know if you want any others, or if I misunderstood the initial request.
I don't mind if you use it as an example tree.
#17551 great, thx
#17542 I was not trolling. I was perhaps being ornery but not in a dishonest manner.
If you wish to ban me from participating here that's understandable. I am abrasive. But you should know I am not a troll, and urge reconsideration.
I wish to respond to other threads about other stuff but I will refrain until you give me a definite green light to continue or a definite red light and this will be my last post here.
FWIW I enjoyed participating here and I had no quarrel with you. I've digested a ton of your content, I appreciate what you write and how you write it. Goodbye or Talk to you later depending on your decision.
#17555 Those weren't intended as moderator actions by me. I try to label moderator actions.
Rule of law and rule of man
I agree with the general case for rule of law rather than rule of man.
One problem I've noticed with rule of law is:
- Lawmakers make laws, initially reasonably clear.
- Whether through ignorance or malicious intent, people find loopholes - ways around the intent of the law while still complying with the letter.
- Lawmakers add to the laws to try to close the loopholes.
- The attempts to close loopholes sometimes create more / different loopholes. Even if they don't, people find new ways to comply with the letter but not the spirit.
- Additionally, people with vested interests in violating the spirit are incentivized to lobby lawmakers for wording that grants them the loophole while denying it to others/competitors. While of course making arguments that the way they're lobbying for is actually best.
- The process above repeats with no end. Maybe the laws get better over time in the sense of more explicitly specifying prohibited behaviors. But they also typically get worse in length, complexity, and arcane-but-arbitrary favoritism, and harder to understand for those who genuinely wish to comply.
Does anyone know a good solution to this problem?
#17567 For most laws, you can't and shouldn't try to make it hyper-literal ... like so you could just enter the data in some present day software and have it tell you the right outcome. You do need judges and intelligent, reasonable people. You can make things way MORE objective and REASONABLY clear but the goal of precisely dealing with every edge case is not gonna work.
So write the laws with the level of precision that you can actually handle. But don't make a bunch of precise rules that don't do what you want. Write what you actually mean even if it's visibly going to require a little judgment from a judge or citizen. Minimize the judgment needed and look for ways to base your rules on stuff people are good at being objective about, like measurements and math.
Also, regulate way less stuff. I think the problems you're talking about come up more with laws that shouldn't exist in the first place – which makes them a lot harder to interpret because the intent is unreasonable or infeasible – than with laws like "don't murder people".
Always keep the wording of laws simple. If you can't explain the law in simple terms, consider not making that law. Adding a bunch of clarifications just makes messes. If you need to fix it, rewrite it better instead of tacking on crap.
Objectivity is never going to be perfect, and our laws aren't going to be mathematically precise, but we can do pretty well at objectivity and differentiate cases where we do that from stuff that does objectivity badly. (It's the same in science or philosophy btw – no perfect objectivity but doing a good job makes a big difference over doing a bad job, and the cases are often pretty polarized into good or bad cases instead of like a bell curve of medium objectivity.)
If you want a more specific answer, you could bring up some example cases where you'd have trouble writing a good law (or a discussion forum rule example is fine too).
I think Yudkowsky believes that "malicious intent" is "easily distinguishable from constructive feedback":
https://twitter.com/devonzuegel/status/1297616131612708864
#17597 Excellent advice by your friend.
I do think it is usually very easy to tell.
And when it is not easy to tell, it still is safe to disengage anyway, why? because if the other person acts malicious, then any further engagement would be a hassle anyway. (and if you're an autist that can't tell the difference then it is not my responsibility to teach you anyway)
Life is too short to mud wrestle with pigs or crytoPigs! (one of the main reasons I stay away from all social media apps) Imma die soon, let the Twitter trolls fight it out and leave me out of it!
#17605 https://curi.us/2288-identification-policy
extra context: he posted on LW (on a fresh account) to smear me by accusing me of sock puppeting (and being autistic).
Last Post Ever.
#17609 I don't consider that a smear. I did not mean to trigger you or offend you. Being autistic is nothing to be ashamed about, and I don't consider it a handicap or w/e. I think there's just different ways of being and yours makes you like I said before, have the mind of a jet engine. It comes at a cost. Everything has trade offs.
I was not smearing you but defending you as a good faith actor. Who was genuinely looking to make progress with those conversations. I cannot believe you would read what I wrote as "smears" and not as a positive defense.
I did not know you were this sensitive, had I known that, I would not have said anything that could have hurt your feelings. Not my intention. I will refrain from participating in your blog and from making comments about you on LW. I think we are just too different, and have way different frameworks that it seems I can't help but insult you, or seem like a troll to you.
It was nice talking to you and hope you develop your ideas for rational conversations further. I think you're onto something. Good bye.
> Interpretation of the above as tree diagram
> https://www.is-this-normal.net/public/Forum%20Moderation%20Needs%20Anti-Bias%20Policies.png
I read this on stream https://youtu.be/Z3KZKoUOxrY