
Project Planning Discussion

This is a discussion about rational project planning. The major theme is that people should consider what their project premises are. What claims are they betting their project success on the correctness of? And why? This matter requires investigation and consideration rather than being ignored.

By project I mean merely a goal-directed activity. It can be, but doesn't have to be, a business project or multi-person project. My primary focus is on larger projects, e.g. projects that take more than one day to finish.

The first part is discussion context. You may want to skip to the second part where I write an article/monologue with no one else talking. It explains a lot of important stuff IMO.


Gavin Palmer:

The most important problem is The Human Resource Problem. All other problems depend on the human resource problem. The Human Resource Problem consists of a set of smaller problems that are related. An important problem within that set is the communication problem: an inability to communicate. I classify that problem as a problem related to information technology and/or process. If people can obtain and maintain a state of mind which allows communication, then there are other problems within that set related to problems faced by any organization. Every organization is faced with problems related to hiring, firing, promotion, and demotion.

So every person encounters this problem. It is a universal problem. It will exist so long as there are humans. We each have the opportunity to recognize and remember this important problem in order to discover and implement processes and tools which can facilitate our ability to solve every problem which is solvable.

curi:

you haven't explained what the human resource problem is, like what things go in that category

Gavin Palmer:

The thought I originally had long ago - was that there are people willing and able to solve our big problems. We just don't have a sufficient mechanism for finding and organizing those people. But I have discovered that this general problem is related to ideas within any organization. The general problem is related to ideas within a company, a government, and even those encountered by each individual mind. The task of recruiting, hiring, firing, promoting, and demoting ideas can occur on multiple levels.

curi:

so you mean it like HR in companies? that strikes me as a much more minor problem than how rationality works.

Gavin Palmer:

If you want to end world hunger it's an HR problem.

curi:

it's many things including a rationality problem

curi:

and a free trade problem and a governance problem and a peace problem

curi:

all of which require rationality, which is why rationality is central

Gavin Palmer:

How much time have you actually put into trying to understand world hunger and the ways it could end?

Gavin Palmer:

How much time have you actually put into building anything? What's your best accomplishment as a human being?

curi:

are you mad?

GISTE:

so to summarize the discussion that Gavin started. Gavin described what he sees as the most important problem (the HR problem), where all other problems depend on it. curi disagreed by saying that how rationality works is a more important problem than the HR problem, and he gave reasons for it. Gavin disagreed by saying that for the goal of ending world hunger, the most important problem is the HR problem -- and he did not address curi's reasons. curi disagreed by saying that the goal of ending world hunger is many problems, all of which require rationality, making rationality the most important problem. Then Gavin asked curi about how much time he has spent on the world hunger problem and asked if he built anything and what his best accomplishments are. Gavin's response does not seem to connect to any of the previous discussion, as far as I can tell. So it's offtopic to the topic of what is the most important problem for the goal of ending world hunger. Maybe Gavin thinks it is on topic, but he didn't say why he thinks so. I guess that curi also noticed the offtopic thing, and that he guessed that Gavin is mad. then curi asked Gavin "are you mad?" as a way to try to address a bottleneck to this discussion. @Gavin Palmer is this how you view how the discussion went or do you have some differences from my view? if there are differences, then we could talk about those, which would serve to help us all get on the same page. And then that would help serve the purpose of reaching mutual understanding and agreement regarding whether or not the HR problem is the most important problem on which all other problems depend.

GISTE:

btw i think Gavin's topic is important. as i see it, its goal is to figure out the relationships between various problems, to figure out which is the most important. i think that's important because it would serve the purpose of helping one figure out which problems to prioritize.

Gavin Palmer:

Here is a google doc linked to a 1-on-1 I had with GISTE (he gave me permission to share). I did get a little angry and was anxious about returning here today. I'm glad to see @curi did not get offended by my questions and asked a question. I am seeing the response after I had the conversation with GISTE. Thank you for your time.

https://docs.google.com/document/d/1XEztqEHLBAJ39HQlueKX3L4rVEGiZ4GEfBJUyXEgVNA/edit?usp=sharing

GISTE:

to be clear, regarding the 1 on 1 discussion linked above, whatever i said about curi are my interpretations. don't treat me as an authority on what curi thinks.

GISTE:

also, don't judge curi by my ideas/actions. that would be unfair to him. (also unfair to me)

JustinCEO:

Curi's response tells me he does not know how to solve world hunger.

JustinCEO:

Unclear to me how that judgment was arrived at

JustinCEO:

I'm reading

JustinCEO:

Lowercase c for curi btw

JustinCEO:

But I have thought about government, free trade, and peace very much. These aren't a root problem related to world hunger.

JustinCEO:

curi actually brought those up as examples of things that require rationality

JustinCEO:

And said that rationality was central

JustinCEO:

But you don't mention rationality in your statement of disagreement

JustinCEO:

You mention the examples but not the unifying theme

JustinCEO:

GISTE:

curi did not say those are root problems.

JustinCEO:

Ya 🙂

JustinCEO:

Ya GISTE got this point

JustinCEO:

I'm on phone so I'm pasting less than I might otherwise

JustinCEO:

another way to think about the world hunger problem is this: what are the bottlenecks to solving it? first name them, before trying to figure out which one is like the most systemic one.

JustinCEO:

I think the problem itself could benefit from a clear statement

GISTE:

That clear statement would include causes of (world) hunger. Right? @JustinCEO

JustinCEO:

I mean a detailed statement would get into that issue some GISTE cuz like

JustinCEO:

You'd need to figure out what counts and what doesn't as an example of world hunger

JustinCEO:

What is in the class of world hunger and what is outside of it

JustinCEO:

And that involves getting into specific causes

JustinCEO:

Like presumably "I live in a first world country and have 20k in the bank but forgot to buy groceries this week and am hungry now" is excluded from most people's definitions of world hunger

JustinCEO:

I think hunger is basically a solved problem in western liberal capitalist democracies

JustinCEO:

People fake the truth of this by making up concepts called "food insecurity" that involve criteria like "occasionally worries about paying for groceries" and calling that part of a hunger issue

JustinCEO:

Thinking about it quickly, I kinda doubt there is a "world hunger" problem per se

GISTE:

yeah before you replied to my last comment, i immediately thought of people who choose to be hungry, like anorexic people. and i think people who talk about world hunger are not including those situations.

JustinCEO:

There's totally a Venezuela hunger problem or a Zimbabwe hunger problem tho

JustinCEO:

But not really an Ohio or Kansas hunger problem

JustinCEO:

Gavin

I try to be pragmatic. If your solution depends on people being rational, then the solution probably will not work. My solution does depend on rational people, but the number of rational people needed is very small

GISTE:

There was one last comment by me that did not get included in the one on one discussion. Here it is. “so, you only want people on your team that already did a bunch of work to solve world hunger? i thought you wanted rational people, not necessarily people that already did a bunch of work to solve world hunger.”

JustinCEO:

What you think being rational is and what it involves could probably benefit from some clarification.

Anyways I think society mostly works to the extent people are somewhat rational in a given context.

JustinCEO:

I regard violent crime for the purpose of stealing property as irrational

JustinCEO:

For example

JustinCEO:

Most people agree

JustinCEO:

So I can form a plan to walk down my block with my iPhone and not get robbed, and this plan largely depends on the rationality of other people

JustinCEO:

Not everyone agrees with my perspective

JustinCEO:

The cop car from the local precinct that is generally parked at the corner is also part of my plan

JustinCEO:

But my plan largely depends on the rationality of other people

JustinCEO:

If 10% or even 5% of people had a pro property crime perspective, the police could not really handle that and I would have to change my plans

Gavin Palmer:

World hunger is just an example of a big problem which depends on information technology related to the human resource problem. My hope is that people interested in any big problem could come to realize that information technology related to the human resource problem is part of the solution to the big problem they are interested in as well as other big problems.

Gavin Palmer:

So maybe "rationality" is related to what I call "information technology".

JustinCEO:

the rationality requirements of my walking outside with phone plan are modest. i can't plan to e.g. live in a society i would consider more moral and just (where e.g. a big chunk of my earnings aren't confiscated and wasted) cuz there's not enough people in the world who agree with me on the relevant issues to facilitate such a plan.

JustinCEO:

anyways regarding specifically this statement

JustinCEO:

If your solution depends on people being rational, then the solution probably will not work.

JustinCEO:

i wonder if the meaning is If your solution depends on [everyone] being [completely] rational, then the solution probably will not work.

Gavin Palmer:

There is definitely some number/percentage I have thought about... like I only need 10% of the population to be "rational".

GISTE:

@Gavin Palmer can you explain your point more? what i have in mind doesn't seem to match your statement. so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.

JustinCEO:

@Gavin Palmer based on the stuff you said so far and in the google doc regarding wanting to work on important problems, you may appreciate this post

JustinCEO:

https://curi.us/2029-the-worlds-biggest-problems

JustinCEO:

Gavin says

A thing that is sacred is deemed worthy of worship. And worship is based in the words worth and ship. And so a sacred word is believed to carry great worth in the mind of the believer. So I can solve world hunger with the help of people who are able and willing. Solving world hunger is not an act done by people who uphold the word rationality above all other words.

JustinCEO:

the word doesn't matter but the concept surely does for problem-solving effectiveness

JustinCEO:

people who don't value rationality can't solve much of anything

nikluk:

Re rationality. Have you read this article and do you agree with what it says, @Gavin Palmer ?
https://fallibleideas.com/reason

GISTE:

So maybe "rationality" is related to what I call "information technology".
can you say more about that relationship? i'm not sure what you have in mind. i could guess but i think it'd be a wild guess that i'm not confident would be right. (so like i could steelman your position but i could easily be adding in my own ideas and ruin it. so i'd rather avoid that.) @Gavin Palmer

Gavin Palmer:

so like if 90% of the people around me weren't rational (like to what degree exactly?), then they'd be stealing and murdering so much that the police couldn't stop them.
I think the image of the elephant rider portrayed by Jonathan Haidt is closer to the truth when it comes to words like rationality and reason. I actually value something like compassion above a person's intellect: and I really like people who have both. There are plenty of idiots in the world who are not going to try and steal from you or murder you. I'm just going to go through these one by one when able.

Gavin Palmer:

https://curi.us/2029-the-worlds-biggest-problems
Learning to think is very important. There were a few mistakes in that article. The big one in my opinion is the idea that 2/3 of the people can change things. On the contrary, our government systems do not have any mechanism in place to learn what 2/3 of the people actually want, nor any ability to allow the greatest problem solvers to influence those 2/3 of the people. We aren't even able to recognize the greatest problem solvers. Another important problem is technology which allows for this kind of information sharing so that we can actually know what the people think and we can allow the greatest problem solvers to be heard. We want that signal to rise above the noise.

The ability to solve problems is like a muscle. For me - reading books does not help me build that muscle - they only help me find better words for describing the strategies and processes which I have developed through trial and error. I am not the smartest person - I learn from trial and error.

curi:

To answer the questions: I have thought about many big problems, such as aging death, AGI, and coercive parenting/education. Yes I've considered world hunger too, though not as a major focus. I'm an (experienced) intellectual. My accomplishments are primarily in philosophy research re issues like how learning and rational discussion work. I do a lot of educational writing and discussion. https://elliottemple.com

curi:

You're underestimating the level of outlier you're dealing with here, and jumping to conclusions too much.

Gavin Palmer:

https://fallibleideas.com/reason
It's pretty good. But science without engineering is dead. That previous sentence reminds me of "faith without works is dead". I'm not a huge fan of science for the sake of science. I'm a fan of engineering and the science that helps us do engineering.

curi:

i don't think i have anything against engineering.

Gavin Palmer:

I'm just really interested in finding people who want to help do the engineering. It's my bias. Even more - it's my passion and my obsession.

Gavin Palmer:

Thinking and having conversations is fun though.

Gavin Palmer:

But sometimes it can feel aimless if I'm not building something useful.

curi:

My understanding of the world, in big picture, is that a large portion of all efforts at engineering and other getting-stuff-done type work are misdirected and useless or destructive.

curi:

This is for big hard problems. The productiveness of practical effort is higher for little things like making dinner today.

curi:

The problem is largely not the engineering itself but the ideas guiding it – the goals and plan.

Gavin Palmer:

I worked for the Army's missile defense program for 6 years when I graduated from college. I left because of the reason you point out. My hope was that I would be able to change things from within.

curi:

So for example in the US you may agree with me that at least around half of political activism is misdirected to goals with low or negative value. (either the red tribe or blue tribe work is wrong, plus some of the other work too)

Gavin Palmer:

Even the ones I agree with and have volunteered for are doing a shit job.

curi:

yeah

curi:

i have found a decent number of people want to "change the world" or make some big improvement, but they can't agree amongst themselves about what changes to make, and some of them are working against others. i think sorting that mess out, and being really confident the projects one works on are actually good, needs to come before implementation.

curi:

i find most people are way too eager to jump into their favored cause without adequately considering why people disagree with it and sorting out all the arguments for all sides.

Gavin Palmer:

There are many tools that don't exist which could exist. And those tools could empower any organization and their goal(s).

curi:

no doubt.

curi:

software is pretty new and undeveloped. adequate tools are much harder to name than inadequate ones.

Gavin Palmer:

adequate tools are much harder to name than inadequate ones.
I don't know what that means.

curi:

we could have much better software tools for ~everything

curi:

"~" means "approximately"

JustinCEO:

Twitter can't handle displaying tweets well. MailMate performance gets sluggish with too many emails. Most PDF software can't handle super huge PDFs well. Workout apps can't use LIDAR to tell ppl if their form is on point

curi:

Discord is clearly a regression from IRC in major ways.

Gavin Palmer:

🤦‍♂️

JustinCEO:

?

JustinCEO:

i find your face palm very unclear @Gavin Palmer; hope you elaborate!

Gavin Palmer:

I find sarcasm very unclear. That's the only way I know how to interpret the comments about Twitter, MailMate, PDF, LIDAR, Discord, IRC, etc.

curi:

I wasn't being sarcastic and I'm confident Justin also meant what he said literally and seriously.

Gavin Palmer:

Ok - thanks for the clarification.

JustinCEO:

ya my statements were made earnestly

JustinCEO:

re: twitter example

JustinCEO:

twitter makes it harder to have a decent conversation cuz it's not good at doing conversation threading

JustinCEO:

if it was better at this, maybe people could keep track of discussions better and reach agreement more easily

Gavin Palmer:

Well - I have opinions about Twitter. But to be honest - I am also trying to look at what this guy is doing:
https://github.com/erezsh/portal-radar

It isn't a good name in my opinion - but the idea is related to having some bot collect discord data so that there can be tools which help people find the signal in the noise.

curi:

are you aware of http://bash.org ? i'm serious about major regressions.

JustinCEO:

i made an autologging system to make discord chat logs on this server so people could pull information (discussions) out of them more easily

JustinCEO:

but alas it's a rube goldberg machine of different tools running together in a VM, not something i can distribute
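
(A minimal sketch of what a simpler, self-contained autologger could look like, assuming the discord.py library; the token, file path, and formatting below are placeholders, not a description of Justin's actual setup.)

```python
# Hypothetical minimal Discord autologger (placeholder details, not Justin's system).
# Assumes discord.py 2.x and a bot token with the message content intent enabled.
import discord

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text in discord.py 2.x
client = discord.Client(intents=intents)

LOG_PATH = "chatlog.txt"  # placeholder output file

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return  # skip bot messages, including this bot's own
    line = f"[{message.created_at.isoformat()}] #{message.channel} {message.author}: {message.content}\n"
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(line)

client.run("YOUR_BOT_TOKEN")  # placeholder token
```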

Gavin Palmer:

Well - it's a good goal. I'm looking to add some new endpoints in a pull request to the github repo I linked above. Then I could add some visualizations.

Another person has built a graphql backend (which he isn't sharing open source) and I have created some of my first react/d3 components to visualize his data.
https://portal-projects.github.io/users/

Gavin Palmer:

I think you definitely want to write the code in a way that it can facilitate collaboration.

curi:

i don't think this stuff will make much difference when people don't know what a rational discussion is and don't want one.

curi:

and don't want to use tools that already exist like google groups.

curi:

which is dramatically better than twitter for discussion

Gavin Palmer:

I'm personally interested in something which I have titled "Personality Targeting with Machine Learning".

Gavin Palmer:

My goal isn't to teach people to be rational - it is to try and find people who are trying to be rational.

curi:

have you identified which philosophical schools of thought it's compatible and incompatible with? and therefore which you're betting on being wrong?

curi:

it = "Personality Targeting with Machine Learning".

Gavin Palmer:

Ideally it isn't hard coded or anything. I could create multiple personality profiles. Three of the markets I have thought about using the technology in would be online dating, recruiting, and security/defense.

curi:

so no?

Gavin Palmer:

If I'm understanding you - a person using the software could create a personality that mimics a historical person for example - and then parse social media in search of people who are saying similar things.
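
(As a rough illustration of the mechanism being described, not Gavin's actual software, whose internals aren't given here: one common approach is to score posts by textual similarity to a profile document, e.g. TF-IDF plus cosine similarity in scikit-learn. The function name, sample texts, and data below are invented.)

```python
# Hypothetical sketch of similarity-based "personality targeting":
# rank social-media posts by textual similarity to a profile document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_posts_by_similarity(profile_text: str, posts: list[str]) -> list[tuple[float, str]]:
    """Return (score, post) pairs sorted from most to least similar to the profile."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([profile_text] + posts)
    scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()
    return sorted(zip(scores, posts), reverse=True)

# Toy usage with invented data:
profile = "Reason, liberty, and individual responsibility are central to a good society."
posts = [
    "I think individual liberty and personal responsibility matter most.",
    "Check out my new recipe for banana bread!",
]
for score, post in rank_posts_by_similarity(profile, posts):
    print(f"{score:.2f}  {post}")
```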

Gavin Palmer:

But I'm not exactly sure what point you are trying to make.

curi:

You are making major bets while being unaware of what they are. You may be wrong and wasting your time and effort, or even be doing something counterproductive. And you aren't very interested in this.

Gavin Palmer:

Well - from my perspective - I am not making any major bets. What is the worst case scenario?

curi:

An example worst case scenario would be that you develop an AGI by accident and it turns us all into paperclips.

Gavin Palmer:

I work with a very intelligent person that would laugh at that idea.

curi:

That sounds like an admission you're betting against it.

curi:

You asked for an example seemingly because you were unaware of any. You should be documenting what bets you're making and why.

Gavin Palmer:

I won't be making software that turns us all into paperclips.

curi:

Have you studied AI alignment?

Gavin Palmer:

I have been writing software for over a decade. I have been using machine learning for many months now. And I have a pretty good idea of how the technology I am using actually works.

curi:

So no?

Gavin Palmer:

No. But if it is crap - do you want to learn why it is crap?

curi:

I would if I agreed with it, though I don't. But a lot of smart people believe it.

curi:

They have some fairly sophisticated reasons, which I don't think it's reasonable to bet against from a position of ignorance.

Gavin Palmer:

Our ability to gauge if someone has understanding on a given subject is relative to how much understanding we have on that subject.

curi:

Roughly, sure. What's your point?

Gavin Palmer:

First off - I'm not sure AGI is even possible. I love to play with the idea. And I would love to get to a point where I get to help build a god. But I am not even close to doing that at this point in my career.

curi:

So what?

Gavin Palmer:

You think there is a risk I would build something that turns humans into paperclips.

curi:

I didn't say that.

Gavin Palmer:

You said that is the worst case scenario.

curi:

Yes. It's something you're betting against, apparently without much familiarity with the matter.

curi:

Given that you don't know much about it, you aren't in a reasonable position to judge how big a risk it is.

curi:

So I think you're making a mistake.

curi:

The bigger picture mistake is not trying to figure out what bets you're making and why.

curi:

Most projects have this flaw.

Gavin Palmer:

My software uses algorithms to classify input data.

curi:

So then, usually, somewhere on the list of thousands of bets being made, are a few bad ones.

curi:

Does this concept make sense to you?

Gavin Palmer:

Love is most important in my hierarchy of values.

Gavin Palmer:

If I used the word in a sentence I would still want to capitalize it.

curi:

is that intended to be an answer?

Gavin Palmer:

Yes - I treat Love in a magical way. And you don't like magical thinking. And so we have very different world views. They might even be incompatible. The difference between us is that I won't be paralyzed by my fears. And I will definitely make mistakes. But I will make more mistakes than you. The quality and quantity of my learning will be very different than yours. But I will also be reaping the benefits of developing new relationships with engineers, learning new technology/process, and building up my portfolio of open source software.

curi:

You accuse me of being paralyzed by fears. You have no evidence and don't understand me.

curi:

Your message is not loving or charitable.

curi:

You're heavily personalizing while knowing almost nothing about me.

JustinCEO:

i agree

JustinCEO:

also, magical thinking can't achieve anything

curi:

But I will also be reaping the benefits of developing new relationships with engineers

curi:

right now you seem to be trying to burn a bridge with an engineer.

curi:

you feel attacked in some way. you're experiencing some sort of conflict. do you want to use a rational problem solving method to try to address this?

curi:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.

doubtingthomas:

J, taking my side here will result in him feeling ganged up on. I think it will be counterproductive psychologically.
Good observation. Are you going to start taking these considerations into account in future conversations?

curi:

I knew that years ago. I already did take it into account.

curi:

please take this tangent to #fi

GISTE:

also, magical thinking can't achieve anything
@JustinCEO besides temporary nice feelings. Long term it's bad though.

doubtingthomas:

yeah sure

JustinCEO:

ya sure GISTE, i meant achieve something in reality

curi:

please stop talking here. everyone but gavin

Gavin Palmer:

You talked about schools of philosophy, AI alignment, and identifying the hidden bets. That's a lot to request of someone.

curi:

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

curi:

Is that what you mean?

Gavin Palmer:

I don't see how my premises are controversial or risky.

curi:

Slow down. Is that what you meant? Did I understand you?

Gavin Palmer:

I am OK with people thinking about premises and risks of an idea and discussing those. But in order to have that kind of discussion you would need to understand the idea. And in order to understand the idea - you have to ask questions.

curi:

it's hard to talk with you because of your repeated unwillingness to give direct answers or responses.

curi:

i don't know how to have a productive discussion under these conditions.

Gavin Palmer:

I will try to do better.

curi:

ok. can we back up?

Thinking about your controversial premises and civilizational risks, in some way instead of ignoring the matter, is too big an ask to expect of people before they go ahead with projects?

did i understand you, yes or no?

Gavin Palmer:

no

curi:

ok. which part(s) is incorrect?

Gavin Palmer:

The words controversial and civilizational are not conducive to communication.

curi:

why?

Gavin Palmer:

They indicate that you think you understand the premises and the risks and I don't know that you understand the idea I am trying to communicate.

curi:

They are just adjectives. They don't say what I understand about your project.

Gavin Palmer:

Why did you use them?

curi:

Because you should especially think about controversial premises rather than all premises, and civilizational risks more than all risks.

curi:

And those are the types of things that were under discussion.

curi:

A generic, unqualified term like "premises" or "risks" would not accurately represent the list of 3 examples "schools of philosophy, AI alignment, and identifying the hidden bets"

Gavin Palmer:

I don't see how schools of philosophy, AI alignment, and hidden bets are relevant. Those are just meaningless words in my mind. The meaning of those words in your mind may contain relevant points. And I would be willing to discuss those points as they relate to the project. But (I think) that would also require that you have some idea of what the software does and how it is done. To bring up these things before you understand the software seems very premature.

curi:

the details of your project are not relevant when i'm bringing up extremely generic issues.

curi:

e.g. there is realism vs idealism. your project takes one side, the other, or is compatible with both. i don't need to know more about your project to say this.

curi:

(or disagrees with both, though that'd be unusual)

curi:

it's similar with skepticism or not.

curi:

and moral relativism.

curi:

and strong empiricism.

curi:

one could go on. at length. and add a lot more using details of your project, too.

curi:

so, there exists some big list. it has stuff on it.

curi:

so, my point is that you ought to have some way of considering and dealing with this list.

curi:

some way of considering what's on it, figuring out which merit attention and how to prioritize that attention, etc.

curi:

you need some sort of policy, some way to think about it that you regard as adequate.

curi:

this is true of all projects.

curi:

this is one of the issues which has logical priority over the specifics of your project.

curi:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

do you think you understand what i'm saying?

Gavin Palmer:

I think I understand this statement:

there are generic concepts about how to approach a project which take precedence over jumping into the details.

curi:

ok. do you agree with that?

Gavin Palmer:

I usually jump into the details. I'm not saying you are wrong though.

curi:

ok. i think looking at least a little at the big picture is really important, and that most projects lose a lot of effectiveness (or worse) due to failing to do this plus some common errors.

curi:

and not having any conscious policy at all regarding this issue (how to think about the many premises you are building on which may be wrong) is one of the common errors.

curi:

i think being willing to think about things like this is one of the requirements for someone who wants to be effective at saving/changing/helping the world (or themselves individually)

Gavin Palmer:

But I have looked at a lot of big picture things in my life.

curi:

cool. doesn't mean you covered all the key ones. but maybe it'll give you a head start on the project planning stuff.

Gavin Palmer:

So do you have an example of a project where it was done in a way that is satisfactory in your mind?

curi:

hmm. project planning steps are broadly unpublished and unavailable for the vast majority of projects. i think the short answer is no one is doing this right. this aspect of rationality is ~novel.

curi:

some ppl do a more reasonable job but it's really hard to tell what most ppl did.

curi:

u can look at project success as a proxy but i don't think that'll be informative in the way you want.

Gavin Palmer:

I'm going to break soon, but I would encourage you to think about some action items for you and me based around this ideal form of project planning. I have real-world experience with various forms of project planning to some degree or another.

curi's Monologue

curi:

the standard way to start is to brainstorm things on the list

curi:

after you get a bunch, you try to organize them into categories

curi:

you also consider what is a reasonable level of overhead for this, e.g. 10% of total project resource budget.

curi:

but a flat percentage is problematic b/c a lot of the work is general education stuff that is reusable for most projects. if you count your whole education, overhead will generally be larger than the project. if you only count stuff specific to this project, you can have a really small overhead and do well.
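
(To make that accounting point concrete, here's a toy calculation with invented numbers; it only illustrates how the overhead ratio shifts depending on what you charge to the project.)

```python
# Toy illustration (invented numbers): the overhead ratio depends on whether
# reusable general education is charged to a single project or not.
project_work_hours = 200        # hours spent on the project itself
project_specific_overhead = 20  # hours checking premises/risks specific to this project
general_education_hours = 600   # e.g. working through an overview book of philosophy

overhead_project_specific = project_specific_overhead / project_work_hours
overhead_with_education = (project_specific_overhead + general_education_hours) / project_work_hours

print(f"{overhead_project_specific:.0%}")  # 10% -- looks modest
print(f"{overhead_with_education:.0%}")    # 310% -- overhead larger than the project itself
```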

curi:

stuff like reading and understanding/remembering/taking-notes-on/etc one overview book of philosophy ideas is something that IMO should be part of being an educated person who has appropriate background knowledge. but many ppl haven't done it. if you assign the whole cost of that to one project it can make the overhead ratio look bad.

curi:

unfortunately i think a lot of what's in that book would be wrong and ignore some more important but less famous ideas. but at least that'd be a reasonable try. most ppl don't even get that far.

curi:

certainly a decent number of ppl have done that. but i think few have ever consciously considered "which philosophy schools of thought does my project contradict? which am i assuming as premises and betting my project success on? and is that a good idea? do any merit more investigation before i make such a bet?" ppl have certainly considered such things in a disorganized, haphazard way, which sometimes manages to work out ok. idk that ppl have done this by design in the way i'm recommending.

curi:

this kind of analysis has large practical consequences, e.g. > 50% of "scientific research" is in contradiction to Critical Rationalist epistemology, which is one of the more famous philosophies of science.

curi:

IMO, consequently it doesn't work and the majority of scientists basically waste their careers.

curi:

most do it without consciously realizing they are betting their careers on Karl Popper being wrong.

curi:

many of them do it without reading any Popper book or being able to name any article criticizing Popper that they think is correct.

curi:

that's a poor bet to make.

curi:

even if Popper is wrong, one should have more information before betting against him like that.

curi:

another thing with scientists is the majority bet their careers on a claim along the lines of "college educations and academia are good"

curi:

this is a belief that some of the best scientists have disagreed with

curi:

a lot of them also have government funding underlying their projects and careers without doing a rational investigation of whether that may be a really bad, risky thing.

curi:

separate issue: broadly, most large projects try to use reason. part of the project is that problems come up and people try to do rational problem solving – use reason to solve the problems as they come up. they don't expect to predict and plan for every issue they're gonna face. there are open controversies about what reason is, how to use it, what problem solving methods are effective or ineffective, etc.

curi:

what the typical project does is go by common sense and intuition. they are basically betting the project on the adequacy of whatever concept of reason they picked up here and there from their culture. i regard this as a very risky bet.

curi:

and different project members have different conceptions of reason, and they are also betting on those being similar enough things don't fall apart.

curi:

commonly without even attempting to talk about the matter or put their ideas into words.

curi:

what happens a lot when people have unverbalized philosophy they picked up from their culture at some unknown time in the past is ... BIAS. they don't actually stick to any consistent set of ideas about reason. they change it around situationally according to their biases. that's a problem on top of some of the ideas floating around our culture being wrong (which is well known – everyone knows that lots of ppl's attempts at rational problem solving don't work well)

curi:

one of the problems in the field of reason is: when and how do you rationally end (or refuse to start) conversations without agreement. sometimes you and the other guy agree. but sometimes you don't, and the guy is saying "you're wrong and it's a big deal, so you shouldn't just shut your mind and refuse to consider more" and you don't want to deal with that endlessly but you also don't want to just be biased and stay wrong, so how do you make an objective decision? preferably is there something you could say that the other guy could accept as reasonable? (not with 100% success rate, some people gonna yell at you no matter what, but something that would convince 99% of people who our society considers pretty smart or reasonable?)

curi:

this has received very little consideration from anyone and has resulted in countless disputes when people disagree about whether it's appropriate to stop a discussion without giving further answers or arguments.

curi:

lots of projects have lots of strife over this specific thing.

curi:

i also was serious about AI risk being worth considering (for basically anything in the ballpark of machine learning, like classifying big data sets) even though i actually disagree with that one. i did consider it and think it merits consideration.

curi:

i think it's very similar to how physicists in 1940 were irresponsible if they were doing work anywhere in the ballpark of nuclear stuff and didn't think about potential weapons.

curi:

another example of a project management issue is how does one manage a schedule? how full should a schedule be packed with activities? i think the standard common sense ways ppl deal with this are wrong and do a lot of harm (the basic error is overfilling schedules in a way which fails to account for variance in task completion times, as explained by Eliyahu Goldratt)
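
(A quick way to see the effect being described: simulate a day scheduled at full capacity versus one with slack, with task durations that vary. The numbers below are invented for illustration; this is a sketch of the general variance point, not Goldratt's full treatment.)

```python
# Toy Monte Carlo sketch (invented numbers): a fully packed personal schedule
# overruns often once task durations vary, even when estimates are unbiased.
import random

def overrun_rate(planned_tasks: int, hours_available: float, trials: int = 10_000) -> float:
    """Fraction of simulated days that run over, with each task planned at 1 hour
    but actually taking between 0.5 and 1.5 hours (uniformly, mean = 1 hour)."""
    overruns = 0
    for _ in range(trials):
        total = sum(random.uniform(0.5, 1.5) for _ in range(planned_tasks))
        if total > hours_available:
            overruns += 1
    return overruns / trials

random.seed(0)
print(overrun_rate(planned_tasks=8, hours_available=8))  # packed: overruns about half the days
print(overrun_rate(planned_tasks=5, hours_available=8))  # ~1/3 slack: essentially never overruns
```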

curi:

i meant there an individual person's schedule

curi:

similarly there is problem of organizing the entire project schedule and coordinating people and things. this has received a ton of attention from specialists, but i think most ppl have an attitude like "trust a standard view i learned in my MBA course. don't investigate rival viewpoints". risky.

curi:

a lot of other ppl have no formal education about the matter and mostly ... don't look it up and wing it.

curi:

even riskier!

curi:

i think most project managers couldn't speak very intelligently about early start vs. late start for dependencies off the critical path.

curi:

and don't know that Goldratt answered it. and it does matter. bad decisions re this one issue result in failed and cancelled projects, late projects, budget overruns, etc.
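
(For readers unfamiliar with the terms, here's a toy sketch with invented tasks of what early start vs. late start off the critical path refers to: a standard forward/backward pass gives each task a scheduling window. It only illustrates the vocabulary, not Goldratt's answer to the question.)

```python
# Toy critical-path calculation (invented tasks): early start, late start, and slack.
durations = {"A": 3, "B": 2, "C": 4}
predecessors = {"A": [], "B": [], "C": ["A", "B"]}  # C waits for both A and B

# Forward pass: earliest start = latest of the predecessors' earliest finishes.
early_start = {}
for task in ["A", "B", "C"]:  # topological order for this toy graph
    early_start[task] = max((early_start[p] + durations[p] for p in predecessors[task]), default=0)

project_length = max(early_start[t] + durations[t] for t in durations)

# Backward pass: latest start = latest allowable finish minus duration.
successors = {t: [s for s, preds in predecessors.items() if t in preds] for t in durations}
late_start = {}
for task in ["C", "B", "A"]:  # reverse topological order
    late_finish = min((late_start[s] for s in successors[task]), default=project_length)
    late_start[task] = late_finish - durations[task]

for t in durations:
    print(t, "early start:", early_start[t], "late start:", late_start[t],
          "slack:", late_start[t] - early_start[t])
# A and C have zero slack (they're the critical path). B can start on day 0 or day 1;
# whether to schedule such off-critical-path tasks early or late is the decision in question.
```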

curi:

lots of ppl's knowledge of decision making processes extends about as far as pro/con lists and ad hoc arguing.

curi:

so they are implicitly betting a significant amount of project effectiveness on something like "my foundation of pro/con lists and ad hoc arguing is adequate knowledge of decision making processes".

curi:

this is ... unwise.

curi:

another generic issue is lying. what is a lie? how do you know when you're lying to yourself? a lot of ppl make a bet roughly like "either my standard cultural knowledge + random variance about lying is good or lying won't come up in the project".

curi:

similar with bias instead of lying.

curi:

another common, generic way projects go wrong is ppl never state the project goal. they don't have clear criteria for project success or failure.

curi:

related, it's common to make basically no attempt to estimate the resources needed to complete the project successfully and estimating the resources available and comparing those two things.

curi:

goals and resource budgeting are things some ppl actually do. they aren't rare. but they're often omitted, especially for more informal and non-business projects.

curi:

including some very ambitious change-the-world type projects, where considering a plan and what resources it'll use is actually important. a lot of times ppl do stuff they think is moving in the direction of their goal without seriously considering what it will take to actually reach their goal.

curi:

e.g. "i will do X to help the environment" without caring to consider what breakpoints exist for helping the environment that make an important difference and how much action is required to reach one.

curi:

there are some projects like "buy taco bell for dinner" that use low resources compared to what you have available (for ppl with a good income who don't live paycheck to paycheck), so you don't even need to consciously think through resource use. but for a lot of bigger ones one ought to estimate e.g. how much time it'll take for success and how much time one is actually allocating to the project.

curi:

often an exploratory project is appropriate first. try something a little and see how you like it. investigate and learn more before deciding on a bigger project or not. ppl often don't consciously separate this investigation from the big project or know which they are doing.

curi:

and so they'll do things like switch to a big project without consciously realizing they need to clear up more time on their schedule to make that work.

curi:

often they just don't think clearly about what their goals actually are and then use bias and hindsight to adjust their goals to whatever they actually got done.

curi:

there are lots of downsides to that in general, and it's especially bad with big ambitious change/improve the world goals.

curi:

one of the most egregious examples of the broad issues i'm talking about is political activism. so many people are working for the red or blue team while having done way too little to find out which team is right and why.

curi:

so they are betting their work on their political team being right. if their political team is wrong, their work is not just wasted but actually harmful. and lots of ppl are really lazy and careless about this bet. how many democrats have read one Mises book or could name a book or article that they think refutes a major Mises claim?

curi:

how many republicans have read any Marx or could explain and cite why the labor theory of value is wrong or how the economic calculation argument refutes socialism?

curi:

how many haters of socialism could state the relationship of socialism to price controls?

curi:

how many of them could even give basic economic arguments about why price controls are harmful in a simple theoretical market model and state the premises/preconditions for that to apply to a real situation?

curi:

i think not many even when you just look at people who work in the field professionally. let alone if you look at people who put time or money into political causes.

curi:

and how many of them base their dismissal of solipsism and idealism on basically "it seems counterintuitive to me" and reject various scientific discoveries about quantum mechanics for the same reason? (or would reject those discoveries if they knew what they were)

curi:

if solipsism or idealism were true it'd have consequences for what they should do, and people's rejections of those ideas (which i too reject) are generally quite thoughtless.

curi:

so it's again something ppl are betting projects on in an unreasonable way.

curi:

to some extent ppl are like "eh i don't have time to look into everything. the experts looked into it and said solipsism is wrong". most such ppl have not read a single article on the topic and could not name an expert on the topic.

curi:

so their bet is not really on experts being right – which if you take that bet thousands of times, you're going to be wrong sometimes, and it may be a disaster – but their bet is actually more about mainstream opinion being right. whatever some ignorant reporters and magazine writers claimed the experts said.

curi:

they are getting a lot of their "expert" info fourth hand. it's filtered by mainstream media, talking heads on TV, popular magazines, a summary from a friend who listened to a podcast, and so on.

curi:

ppl will watch and accept info from a documentary made by ppl who consulted with a handful of ppl who some university gave expert credentials. and the film makers didn't look into what experts or books, if any, disagree with the ones they hired.

curi:

sometimes the info presented disagrees with a majority of experts, or some of the most famous experts.

curi:

sometimes the film makers have a bias or agenda. sometimes not.

curi:

there are lots of issues where lots of experts disagree. these are, to some rough approximation, the areas that should be considered controversial. these merit some extra attention.

curi:

b/c whatever you do, you're going to be taking actions which some experts – some ppl who have actually put a lot of work into studying the matter – think is a bad idea.

curi:

you should be careful before doing that. ppl often aren't.

curi:

politics is a good example of this. whatever side you take on any current political issue, there are experts who think you're making a big mistake.

curi:

but it comes up in lots of fields. e.g. psychiatry is much less of an even split but there are a meaningful number of experts who think anti-psychotic drugs are harmful not beneficial.

curi:

one of the broad criteria for areas you should look into some before betting your project on are controversial areas. another is big risk areas (it's worse if you're wrong, like AI risk or e.g. there's huge downside risk to deciding that curing aging is a bad cause).

curi:

these are imperfect criteria. some very unpopular causes are true. some things literally no one currently believes are true. and you can't deal with every risk that doesn't violate the laws of physics. you have to estimate plausibility some.

curi:

one of the important things to consider is how long does it take to do a good job? could you actually learn about all the controversial areas? how thoroughly is enough? how do you know when you can move on?

curi:

are there too many issues where 100+ smart ppl or experts think ur initial plan is wrong/bad/dangerous, or could you investigate every area like that?

curi:

relying on the opinions of other ppl like that should not be your whole strategy! that gives you basically no chance against something your culture gets systematically wrong. but it's a reasonable thing to try as a major strategy. it's non-obvious to come up with way better approaches.

curi:

you should also try to use your own mind and judgment some, and look into areas you think merit it.

curi:

another strategy is to consider things that people say to you personally. fans, friends, anonymous ppl willing to write comments on your blog... this has some merits like you get more customized advice and you can have back and forth discussion. it's different to be told "X is dangerous b/c Y" from a book vs. a person where you can ask some clarifying questions.

curi:

ppl sometimes claim this strategy is too time consuming and basically you have to ignore ~80% of all criticism you're aware of according to your judgment, with no clear policies or principles to prevent biased judgments. i don't agree and have written a lot about this matter.

curi:

i think this kind of thing can be managed with reasonable, rational policies instead of basically giving up.

curi:

some of my writing about it: https://elliottemple.com/essays/using-intellectual-processes-to-combat-bias

curi:

most ppl have very few persons who want to share criticism with them anyway, so this article and some others have talked more about ppl with a substantial fan base who actually want to say stuff to them.

curi:

i think ppl should write down what their strategy is and do some transparency so they can be held accountable for actually doing it in addition to the strategy itself being something available for ppl to criticize.

curi:

a lot of times ppl's strategy is roughly "do whatever they feel like" which is such a bias enabler. and they don't even write down anything better and claim to do it. they will vaguely, non-specifically say they are doing something better. but no actionable or transparent details.

curi:

if they write something down they will want it to actually be reasonable. a lot of times they don't even put their policies into words in their own head. when they try to use words, they will see some stuff is unreasonable on their own.

curi:

if you can get ppl to write anything down what happens next is a lot of times they don't do what they said they would. sometimes they are lying pretty intentionally and other times they're just bad at it. either way, if they recognize their written policies are important and good, and then do something else ... big problem, even in their own view.

curi:

so what they really need are policies with some clear steps and criteria where it's really easy to tell if they are being done or not. not just vague stuff about using good judgment or doing lots of investigation of alternative views that represent material risks to the project. actual specifics like a list of topic areas to survey the current state of expert knowledge in, with a blog post summarizing the research for each area.

curi:

as in they will write a blog post that gives info about things like what they read and what they think of it, rather than them just saying they did research and their final conclusion.

curi:

and they should have written policies about ways critics can get their attention, and for in what circumstances they will end or not start a conversation to preserve time.

curi:

if you don't do these things and you have some major irrationalities, then you're at high risk of a largely unproductive life. which is IMO what happens to most ppl.

curi:

most ppl are way more interested in social status hierarchy climbing than taking seriously that they're probably wrong about some highly consequential issues.

curi:

and that for some major errors they are making, better ideas are actually available and accessible right now. it's not just an error where no one knows better or only one hermit knows better.

curi:

there are a lot of factors that make this kind of analysis much harder for ppl to accept. one is they are used to viewing many issues as inconclusive. they deal with controversies by judging one side seems somewhat more right (or sometimes: somewhat higher social status) instead of actually figuring out decisive, clear cut answers.

curi:

and they think that's just kinda how reason works. i think that's a big error and it's possible to actually reach conclusions. and ppl actually do reach conclusions. they decide one side is better and act on it. they are just doing that without having any reason they regard as adequate to reach that conclusion...

curi:

some of my writing about how to actually reach conclusions re issues http://curi.us/1595-rationally-resolving-conflicts-of-ideas

curi:

this (possibility of reaching actual conclusions instead of just saying one side seems 60% right) is a theme which is found, to a significant extent, in some of the other thinkers i most admire like Eliyahu Goldratt, Ayn Rand and David Deutsch.

curi:

Rand wrote this:

curi:

Now some of you might say, as many people do: “Aw, I never think in such abstract terms—I want to deal with concrete, particular, real-life problems—what do I need philosophy for?” My answer is: In order to be able to deal with concrete, particular, real-life problems—i.e., in order to be able to live on earth.
You might claim—as most people do—that you have never been influenced by philosophy. I will ask you to check that claim. Have you ever thought or said the following? “Don’t be so sure—nobody can be certain of anything.” You got that notion from David Hume (and many, many others), even though you might never have heard of him. Or: “This may be good in theory, but it doesn’t work in practice.” You got that from Plato. Or: “That was a rotten thing to do, but it’s only human, nobody is perfect in this world.” You got it from Augustine. Or: “It may be true for you, but it’s not true for me.” You got it from William James. Or: “I couldn’t help it! Nobody can help anything he does.” You got it from Hegel. Or: “I can’t prove it, but I feel that it’s true.” You got it from Kant. Or: “It’s logical, but logic has nothing to do with reality.” You got it from Kant. Or: “It’s evil, because it’s selfish.” You got it from Kant. Have you heard the modern activists say: “Act first, think afterward”? They got it from John Dewey.
Some people might answer: “Sure, I’ve said those things at different times, but I don’t have to believe that stuff all of the time. It may have been true yesterday, but it’s not true today.” They got it from Hegel. They might say: “Consistency is the hobgoblin of little minds.” They got it from a very little mind, Emerson. They might say: “But can’t one compromise and borrow different ideas from different philosophies according to the expediency of the moment?” They got it from Richard Nixon—who got it from William James.

curi:

which is about how ppl are picking up a bunch of ideas, some quite bad, from their culture, and they don't really know what's going on, and then those ideas affect their lives.

curi:

and so ppl ought to actually do some thinking and learning for themselves to try to address this.

curi:

broadly, a liberal arts education should have provided this to ppl. maybe they should have had it by the end of high school even. but our schools are failing badly at this.

curi:

so ppl need to fill in the huge gaps that school left in their education.

curi:

if they don't, to some extent what they are at the mercy of is the biases of their teachers. not even their own biases or the mistakes of their culture in general.

curi:

schools are shitty at teaching ppl abstract ideas like an overview of the major philosophers and shitty at teaching practical guidelines like "leave 1/3 of your time slots unscheduled" and "leave at least 1/3 of your income for optional, flexible stuff. don't take on major commitments for it"

curi:

(this is contextual. like with scheduling, if you're doing shift work and you aren't really expected to think, then ok the full shift can be for doing the work, minus some small breaks. it's advice more for ppl who actually make decisions or do knowledge work. still applies to your social calendar tho.)

curi:

(and actually most ppl doing shift work should be idle some of the time, as Goldratt taught us.)

curi:

re actionable steps, above i started with addressing the risky bets / risky project premises. with first brainstorming things on the list and organizing into categories. but that isn't where project planning starts.

curi:

it starts with more like

curi:

goal (1 sentence). how the goal will be accomplished (outline. around 1 paragraph worth of text. bullet points are fine)

curi:

resource usage for major, relevant resource categories (very rough ballpark estimates, e.g. 1 person or 10 or 100 ppl work on it. it takes 1 day, 10 days, 100 days. it costs $0, $1000, $1000000.)
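
(As one made-up example of the minimum being described, a plan at this level of detail might look like the following; the project, steps, and numbers are invented placeholders, not a recommendation of any particular project.)

```python
# Made-up example of a minimal project plan skeleton: one-sentence goal,
# a short outline, and rough order-of-magnitude resource estimates.
project_plan = {
    "goal": "Publish a free online tutorial that teaches beginners basic grammar.",
    "plan_outline": [
        "survey existing grammar tutorials and note gaps",
        "draft 10 short lessons with exercises",
        "get feedback from 3 test readers and revise",
        "publish on a blog and announce it",
    ],
    # Ballpark estimates only (1 vs. 10 vs. 100 scale), not precise budgets:
    "people": 1,
    "calendar_days": 100,
    "dollars": 0,
}
```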

curi:

you can go into more detail, those are just minimums. often fine to begin with.

curi:

for big, complicated projects you may need a longer outline to say the steps involved.

curi:

then once u have roughly a goal and a plan (and the resource estimates help give concrete meaning to the plan), then you can look at risks, ways it may fail.

curi:

the goal should be clearly stated so that someone could clearly evaluate potential outcomes as "yes that succeeded" or "no, that's a failure"

curi:

if this is complicated, you should have another section giving more detail on this.

curi:

and do that before addressing risks.

curi:

another key area is prerequisites. can do before or after risks. skills and knowledge you'll need for the project. e.g. "i need to know how to wash a test tube". especially notable are things that aren't common knowledge and you don't already know or know how to do.

curi:

failure to succeed at all the prerequisites is one of the risks of a project. the prerequisites can give you some ideas about more risks in terms of intellectual bets being made.

curi:

some prerequisites are quite generic but merit more attention than they get. e.g. reading skill is something ppl take for granted that they have, but it's actually an area where most ppl could get value from improving. and it's pretty common ppl's reading skills are low enough that it causes practical problems if they try to engage with something. this is a common problem with intellectual writing but it comes up plenty with mundane things like cookbooks or text in video games that provides information about what to do or how an ability works. ppl screw such things up all the time b/c they find reading burdensome and skip reading some stuff. or they read it fast, don't understand it, and don't have the skill to realize they missed stuff.

curi:

quite a few writers are not actually as good at typing as they really ought to be, and it makes their life significantly worse and less efficient.

curi:

and non-writers. cuz a lot of ppl type stuff pretty often.

curi:

and roughly what happens is they add up all these inefficiencies and problems, like being bad at typing and not knowing good methods for resolving family conflicts, and many others, and the result is they are overwhelmed and think it'd be very hard to find time to practice typing.

curi:

their inefficiencies take up so much time they have trouble finding time to learn and improve.

curi:

a lot of ppl's lives look a lot like that.


Elliot Temple on May 25, 2020

Messages (53)

the reason most ppl think addressing *all* known problems, or dealing with *all* the major risks, or anything like that, would take too long, is they think even dealing with *one* would take ~forever.

e.g. consider capitalism vs. socialism. most ppl think they couldn't reach a (rational, objective) conclusive answer to that issue given a whole lifetime to focus on it. (and indeed they don't know currently how to reach a rational, objective conclusion about it.)

the problem has basically nothing to do with there being too many issues to deal with and is really about ppl being unable to deal with even one issue.


curi at 8:32 PM on May 25, 2020 | #16578 | reply | quote

Typo in 1st paragraph:

> What claims are they better their project success on the correctness of?

better -> betting


Andy Dufresne at 8:50 AM on May 26, 2020 | #16579 | reply | quote

#16611

> Humans have the potential to solve many big problems which are not currently being solved. For example, we could eventually be living in a world where malnutrition and starvation is avoidable. The main reason we live in a world where humans suffer from malnutrition is because humans have not figured out how to work together effectively. I have broadly called this general problem of humans not working together effectively “HRP”.

You don't explain what that means. You don't give examples. You don't argue your case.

Does "have not figured out how to work together effectively" refer to initiating force? If no, isn't that a relevant and especially high priority problem? If yes, the post doesn't seem to say much about how to deal with it.


Dagny at 4:45 PM on June 4, 2020 | #16613 | reply | quote

#16613

> You don't explain what that means. You don't give examples. You don't argue your case.

> Does "have not figured out how to work together effectively" refer to initiating force? If no, isn't that a relevant and especially high priority problem? If yes, the post doesn't seem to say much about how to deal with it.

I guess I need to share my imagination of a future where humans can avoid starvation and malnutrition. I will begin writing these ideas down into a new blog post. BTW - I don’t get notified when you respond.


Gavin Palmer at 4:19 AM on June 7, 2020 | #16622 | reply | quote

#16624 You aren't engaging with relevant existing knowledge, e.g. political philosophy and economics in general. This is the same error as someone trying to propose new ideas about physics who is unfamiliar with existing knowledge about physics and math.

And I don't think this discussion will be productive without some organizing methods and principles, e.g. the idea tree method.


curi at 10:46 AM on June 7, 2020 | #16625 | reply | quote

#16625 - how much knowledge exists which attempts to deal with an age of automation whereby all necessities can be made available to all people without need for human labor?


Gavin Palmer at 1:00 PM on June 7, 2020 | #16628 | reply | quote

#16628 There is already *tons* of relevant knowledge. Have you ever looked?


curi at 1:16 PM on June 7, 2020 | #16629 | reply | quote

I’ve read tons of science fiction.


Gavin Palmer at 1:20 PM on June 7, 2020 | #16630 | reply | quote

#16629 - are you concerned that I would destroy the world?


Gavin Palmer at 1:22 PM on June 7, 2020 | #16631 | reply | quote

#16630 So you didn't look at non-fiction like economics or political philosophy?


curi at 1:26 PM on June 7, 2020 | #16632 | reply | quote

#16632 - I have had many encounters with economics and political philosophy. It was never obvious why that information was terribly important because it never helped me. It only gave me words to describe things that weren’t relevant. It was not useful knowledge. And any attempt for me to point out all the ways that people are wrong didn’t seem like a valuable use of my time. I could go back and criticize those writings now that I have a community to share with. But I obviously can’t know how it will help me create powerful technology which can give people more freedom. Obviously - I don’t foresee a deep investigation into economics or political philosophy revealing anything of importance. But I am willing to give it another try now if you are willing to point me at something you think will be useful and relevant with regards to my goals.


Gavin Palmer at 1:39 PM on June 7, 2020 | #16633 | reply | quote

#16632 - I spent a lot of time in The Zeitgeist Movement forums where many ideas were discussed and nothing useful was created.


Gavin Palmer at 1:41 PM on June 7, 2020 | #16634 | reply | quote

Do you know, in overview, what scarcity is and what economics has to say about it?


curi at 1:49 PM on June 7, 2020 | #16635 | reply | quote

> [Economics] was not useful knowledge. And any attempt for me to point out all the ways that people are wrong didn’t seem like a valuable use of my time.

Did you write down any of the ways that people are wrong?

Did you, say, write down one error (or citation to an error written by someone else) per major school of economic thought?


curi at 1:52 PM on June 7, 2020 | #16636 | reply | quote

#16641 The post doesn't attempt to answer my questions. We're having a discussion methodology conflict. I am more in favor of answering questions than you are and/or I have a different concept of what an answer is. I don't really know what you're doing or why. Is your approach in words right now, before I asked (the question is not whether you could put it in words, but whether it *already was*)? I think it's not but I don't want to assume.


curi at 11:56 AM on June 8, 2020 | #16642 | reply | quote

Attributing my comments to "people wonder" (unlinked), instead of e.g. my name with a permalink, is unacceptable.


curi at 12:07 PM on June 8, 2020 | #16643 | reply | quote

#16643 - I updated the page. But I thought there were others who had similar questions in the discord server.


Gavin Palmer at 5:07 AM on June 9, 2020 | #16646 | reply | quote

#16642 - I have written many of these ideas down many times throughout the last decade. Some of these ideas were written a decade ago in TZM forums which are no longer available. I sometimes do mind reading. I imagined that you were actually interested in my economics knowledge instead of being interested in whether I have actually written my ideas down.


Gavin Palmer at 5:12 AM on June 9, 2020 | #16647 | reply | quote

#16647 I asked short, direct questions. You did not answer them and still haven't (semi-answer to 1 out of 3 now). If you would bear with me and participate in back-and-forth conversation, or ask about the purpose of the questions, you could find out why I asked them. You don't have to jump to conclusions about e.g. my disinterest in substance.

I don't think this discussion will be productive without some way of organizing it. We need an approach to discussion that we both use. I think winging it and relying on intuition and preconceptions isn't going to work. Are you open to trying to do that?

And please stop trying to do mind reading and instead respond to what I write. (This is a major idea of ~all rational, effective ways to organize intellectual discussions.)


curi at 12:28 PM on June 9, 2020 | #16649 | reply | quote

#16649 - Before I answer some question, I would like to know why that question is valuable. In the software world - we talk about “rabbit holes” where people can spend a lot of time and do nothing valuable. I would like to avoid rabbit holes in conversation if that is OK with you.

My goal is to do valuable work. I think your goal is to help me think more clearly, but it would be good for you to state your goal. Does your goal have anything to do with valuable work?

https://herolfg.com/posts/my-philosophy-knowledge/


Gavin Palmer at 5:16 AM on June 10, 2020 | #16658 | reply | quote

> Before I answer some question, I would like to know why that question is valuable.

Then respond by asking that instead of by giving a non-answer.

Repeating what you didn't answer nor ask about:

>> I don't think this discussion will be productive without some way of organizing it. We need an approach to discussion that we both use. I think winging it and relying on intuition and preconceptions isn't going to work. Are you open to trying to do that?

My goal with you is to have a productive discussion where we both learn and there is successful communication. We currently aren't on the same page about how to accomplish that, which I'm trying to address.


curi at 12:43 PM on June 10, 2020 | #16659 | reply | quote

>>> I don't think this discussion will be productive without some way of organizing it. We need an approach to discussion that we both use. I think winging it and relying on intuition and preconceptions isn't going to work.

I think winging it would work if we didn’t have any agenda and were genuinely interested in learning.

>>> Are you open to trying to do that?

Yes. I am open to having a conversation which is bound by rules.

> My goal with you is to have a productive discussion where we both learn and there is successful communication. We currently aren't on the same page about how to accomplish that, which I'm trying to address.

I would like to learn valuable knowledge. Learning without reason is a trap.


Gavin Palmer at 6:18 AM on June 11, 2020 | #16661 | reply | quote

> I think winging it would work if we didn’t have any agenda and were genuinely interested in learning.

I disagree but I don't think we need to discuss it because you said you'd try organized discussion.

And yes, I meant learning things that the learner considers worthwhile, not things he considers valueless.

So will you use idea trees? See https://curi.us/2311-making-idea-trees


curi at 11:29 AM on June 11, 2020 | #16666 | reply | quote

> So will you use idea trees? See https://curi.us/2311-making-idea-trees

I am willing to try them out. How should we start? What is the topic?


Gavin Palmer at 1:33 PM on June 11, 2020 | #16669 | reply | quote

#16669 The first step would be you learning to use them. Read through the linked info and start practicing?


curi at 1:34 PM on June 11, 2020 | #16670 | reply | quote

#16670

I went through your video and read your posts. I gave it a try:


Gavin Palmer at 6:12 AM on June 12, 2020 | #16675 | reply | quote

#16675 I don't know what you mean by "an intentional person" or why "teams" is a sub-category of (what seems to be) a type of person.

I think you have some idea in your mind about what the tree is for/about which isn't being shared in words.


curi at 1:10 PM on June 12, 2020 | #16678 | reply | quote

#16678 - I should rename the root node “how to do good work”.


Gavin Palmer at 1:52 PM on June 12, 2020 | #16679 | reply | quote

> I should rename the root node “how to do good work”.

OK. That makes more sense. Now I'm looking at the value branch because it's on top (first).

There are many types of value. The nodes under "value" don't clarify what you're writing about except partially, indirectly by mentioning money and externalities.

Value is not always related to money b/c e.g. friendship is a value. And education. And trust. And fame. And political power. And flying under the radar so you aren't doxed. And avoiding politics so you aren't frustrated all the time. And much more.

Also I don't know why:

> Value is not always related to money because of externalities.

is a child of

> Value depends on all the costs and benefits.

What is the connection between the two?

I think you should begin practicing with organizing existing stuff into a tree instead of trying to create stuff while also creating a tree. Practice should initially focus on one thing at a time. Otherwise you run into too many problems at once. Does that make sense?

For example, could you make a tree of 1-2+3?

And what about of this:

http://www.szasz.com/manifesto.html

> 3. *Presumption of competence.* Because being accused of mental illness is similar to being accused of crime, we ought to presume that psychiatric "defendants" are mentally competent, just as we presume that criminal defendants are legally innocent. Individuals charged with criminal, civil, or interpersonal offenses ought never to be treated as incompetent solely on the basis of the opinion of mental health experts. Incompetence ought to be a judicial determination and the "accused" ought to have access to legal representation and a right to trial by jury.

There are often multiple different valid ways to make trees from text. The goal is a good, useful tree and to avoid any clear errors. Some judgment is needed – it's not just a mechanical process. Level of detail is an area that's particularly flexible. You could break down every sentence and analyze its components. And the entire paragraph could be one node in a bigger tree. In this case, I suggest doing one node per clause.

This may be too hard but I guessed you'd want to at least try it before maybe doing something simpler. So it's a real example of intellectual discourse from an (IMO) top thinker.


curi at 2:43 PM on June 12, 2020 | #16681 | reply | quote

> I went through your video and read your posts.

Which video?


curi at 3:14 PM on June 12, 2020 | #16682 | reply | quote

>> I went through your video and read your posts.

> Which video?

https://youtu.be/lkyKXRijwYQ


Gavin Palmer at 5:53 AM on June 13, 2020 | #16687 | reply | quote

>> I should rename the root node “how to do good work”.

> OK. That makes more sense. Now I'm looking at the value branch because it's on top (first).

I didn’t prioritize the root node children from top to bottom. My first impression of the tree was that the depth and number of descendants within indicate what I prioritize. BTW - I started reading Goldratt’s “It’s Not LUCK” yesterday and I am enjoying it.

> There are many types of value. The nodes under "value" don't clarify what you're writing about except partially, indirectly by mentioning money and externalities.

I didn’t think there was a need to go deeper into this subtree. I didn’t think I needed to try and go into all of the different kinds of costs and benefits. I just wanted to recognize the truth that the monetary price of a good or service does not always relate to the actual value.

> Value is not always related to money b/c e.g. friendship is a value. And education. And trust. And fame. And political power. And flying under the radar so you aren't doxed. And avoiding politics so you aren't frustrated all the time. And much more.

Taking children seriously isn’t guaranteed to be a financially profitable endeavor.

> Also I don't know why:

>> Value is not always related to money because of externalities.

> is a child of

>> Value depends on all the costs and benefits.

> What is the connection between the two?

An externality can be a cost or a benefit. An externality is a broad type of costs and benefits which should be considered when doing good work.

> I think you should begin practicing with organizing existing stuff into a tree instead of trying to create stuff while also creating a tree. Practice should initially focus on one thing at a time. Otherwise you run into too many problems at once. Does that make sense?

> For example, could you make a tree of 1-2+3?

2 + 3 = 5

5 - 1 = 4

> And what about of this:

> http://www.szasz.com/manifesto.html

>> 3. *Presumption of competence.* Because being accused of mental illness is similar to being accused of crime, we ought to presume that psychiatric "defendants" are mentally competent, just as we presume that criminal defendants are legally innocent. Individuals charged with criminal, civil, or interpersonal offenses ought never to be treated as incompetent solely on the basis of the opinion of mental health experts. Incompetence ought to be a judicial determination and the "accused" ought to have access to legal representation and a right to trial by jury.

An analysis on this tree could lead to a conversation about the problems of our judicial systems and the problems with systems in general.

> There are often multiple different valid ways to make trees from text. The goal is a good, useful tree and to avoid any clear errors. Some judgment is needed – it's not just a mechanical process. Level of detail is an area that's particularly flexible. You could break down every sentence and analyze its components. And the entire paragraph could be one node in a bigger tree. In this case, I suggest doing one node per clause.

> This may be too hard but I guessed you'd want to at least try it before maybe doing something simpler. So it's a real example of intellectual discourse from an (IMO) top thinker.

Szasz seems like a top level thinker from my cursory overview.

http://www.szasz.com/cognitiveliberties.html

I am most concerned with figuring out how to make top level thinkers have more influence and bottom level thinkers have less influence.


Gavin Palmer at 6:50 AM on June 13, 2020 | #16688 | reply | quote

> > What is the connection between the two?

> An externality can be a cost or a benefit. An externality is a broad type of costs and benefits which should be considered when doing good work.

That doesn't answer the question. It makes two claims about externalities. It makes zero statements about what the connection is.

> > For example, could you make a tree of 1-2+3?

> 2 + 3 = 5

> 5 - 1 = 4

That response isn't a tree. It's also indecipherable or wrong as an arithmetic answer.

>> And what about [making a tree] of this:

> An analysis on this tree could lead to a conversation about the problems of our judicial systems and the problems with systems in general.

An analysis "on" (of) what tree? You didn't make a tree (as I had suggested as a next step).

Three times in a row (the examples above in this message; your other text was problematic too btw) you were not responsive to what I said. Your replies don't engage with me. I don't know where to go from here because you seem to be unwilling or unable to write text which is responsive to what I say, and you seem to have already stopped trying to learn a discussion method (trees) that would help with that problem, right after expressing your willingness to learn it.


curi at 1:15 PM on June 13, 2020 | #16689 | reply | quote

>>> What is the connection between the two?

>> An externality can be a cost or a benefit. An externality is a broad type of costs and benefits which should be considered when doing good work.

> That doesn't answer the question. It makes two claims about externalities. It makes zero statements about what the connection is.

There is a “is a” connection and a “has a” relationship. The externality “is a” type of cost and benefit. The fact that externalities exist “has a” impact on what it means to do good work. (I am typing this to help you see how my brain creates trees. In software, the “is a” and “has a” relationships can be used quite often and they are valuable.)

Do you object to “is-a” and “has-a” relationships? It seems Goldratt didn’t use these kinds of relationships.

>>> For example, could you make a tree of 1-2+3?

>> 2 + 3 = 5

>> 5 - 1 = 4

> That response isn't a tree. It's also indecipherable or wrong as an arithmetic answer.

I tried to have the answer use indentations to indicate a tree in the response. Thank you for pointing out my math mistake. (I was willing to try to do a small tree without some external tool. The cost was low to try and do this simple tree. Obviously my brain didn’t treat the simple problem seriously enough because I didn’t double check my work.)

1-2+3

{tab}1-2=-1

{tab}{tab}-1+3=2

1-2+3

{tab}3-2=1

{tab}{tab}1+1=2

>>> And what about [making a tree] of this:

>> An analysis on this tree could lead to a conversation about the problems of our judicial systems and the problems with systems in general.

> An analysis "on" (of) what tree? You didn't make a tree (as I had suggested as a next step).

I make trees in my head. I chose not to do this tree and instead try to do the simple math tree. I’m trying to intentionally consider the cost and benefit of my actions.


Gavin Palmer at 3:37 AM on June 14, 2020 | #16692 | reply | quote

#16692 If you were writing a program in lisp to check 2 + 3 = 5 how would you write it?


oh my god it's turpentine at 12:43 PM on June 14, 2020 | #16694 | reply | quote

Impasse

I am familiar with both "is a" and "has a" relationships, from both philosophy and software. I believe you're using them incorrectly.

Your answer regarding the math tree is both unclear (again) and wrong (again).

We're at an impasse. You:

1. Discuss in ways I regard as disorganized, chaotic, ineffective and unproductive.

2. Have not proposed any solutions, such as an organized discussion method.

3. Agreed to learn and use the method I suggested (involving trees), but have not followed up by acting accordingly. E.g. you apparently don't want to make the effort to actually make the math tree or the clause tree (you also don't seem to know how and aren't asking questions to learn, but you could have at least made the effort to use MindNode).

I see no way forward because there is no discussion method you're willing and able to use to enable discussion to be productive.

From my pov (point of view), you're making a stream of errors which are mostly not being corrected, including arithmetic errors, but you persist in trying to say and deal with sophisticated, complex things (which you get wrong) instead of focusing on things that would be successful.

You have not provided any way of proceeding that makes sense from my pov, and you haven't been cooperative with my attempts at problem solving. Saying you agree with stuff and not acting accordingly is actually worse for me, and harder to deal with, than expressing disagreement.

So we're at an impasse.


curi at 2:56 PM on June 14, 2020 | #16695 | reply | quote

> I am familiar with both "is a" and "has a" relationships, from both philosophy and software. I believe you're using them incorrectly.

You could try to explain why you believe I am using these relationships incorrectly?

> Your answer regarding the math tree is both unclear (again) and wrong (again).

Can you do the tree in your comment section without using a link to an image?

> So we're at an impasse.

You have to be willing to learn if you want to have a conversation. I have proven my willingness to learn. I don’t believe there needs to be anything more than a sincere willingness to learn.


Gavin Palmer at 4:18 AM on June 15, 2020 | #16699 | reply | quote

#16694 - I have never used lisp before. I can use javascript, python, php proficiently.


Gavin Palmer at 4:19 AM on June 15, 2020 | #16700 | reply | quote

Impasse 2

#16699 Impasse #2: the reply to impasse #1 did not engage with impasse #1.

This is now a length 2 impasse chain.


curi at 12:47 PM on June 15, 2020 | #16701 | reply | quote

#16701 - If I am asking you to show me how the math tree should look in your opinion and I have showed you how I think it should look in my opinion and you are unwilling to put forth the effort to show what the math tree should look like then that is evidence that you share some responsibility in the impasse and you aren’t interested in helping me learn to use the conversation tree the way you think it should be used.


Gavin Palmer at 10:40 AM on June 16, 2020 | #16705 | reply | quote

Impasse 3

#16705 Impasse #3: the reply to impasse #2 did not engage with impasse #2.

This is now a length 3 impasse chain.


curi at 12:05 PM on June 16, 2020 | #16708 | reply | quote

#16695 - If I am asking you to show me how the math tree should look in your opinion and I have showed you how I think it should look in my opinion and you are unwilling to put forth the effort to show what the math tree should look like then that is evidence that you share some responsibility in the impasse and you aren’t interested in helping me learn to use the conversation tree the way you think it should be used.


Gavin Palmer at 8:10 AM on June 17, 2020 | #16712 | reply | quote

Impasse 4

#16705 Impasse #4: No response to impasse #3 ( #16708 ). Continuing to ignore my pov by attempting to continue discussion that I already expressed an impasse with.


curi at 9:54 AM on June 17, 2020 | #16714 | reply | quote

> 3. Agreed to learn and use the method I suggested (involving trees), but have not followed up by acting accordingly. E.g. you apparently don't want to make the effort to actually make the math tree or the clause tree (you also don't seem to know how and aren't asking questions to learn, but you could have at least made the effort to use MindNode).

I put forth effort. So this is not true. A tree can be done like this:

root-node-id1

{tab}child-of-id1-with-id2

{tab}child-of-id1-with-id3

{tab}{tab}child-of-id3-with-id4


Gavin Palmer at 4:00 PM on June 17, 2020 | #16715 | reply | quote

Impasse 5

#16715 Impasse #5: No response to impasse #4.

I'm ending the conversation here and I'm not interested in having any other conversations with you. I have better things to do.

As a way forward now, I suggest solo learning or talking with someone else interested in FI.


curi at 12:14 PM on June 18, 2020 | #16721 | reply | quote

math tree using mermaid-js format

graph TD
A[1 - 2 + 3] -->B[+]
B --> C[-]
B --> D[3]
C --> E[1]
C --> F[2]

It would be nice to see a preview in the comment area. I could play with using an svg or image link provided by mermaid-js.


Gavin Palmer at 6:19 AM on June 25, 2020 | #16775 | reply | quote

#16775

I've played with trying to do this automatically before. There are issues with rendering it here, let alone on a mostly empty docs page (as was my case). You should just use the live editor and save or link the image. Much easier. https://mermaid-js.github.io/mermaid-live-editor/.

You did, at least, make a tree that time. If you use mermaid much, you'll find the diagrams get to be painful after a short while (e.g. handling multiline text and quotations). If you do try multiline text, you'll need to know `a1["Some<br/>text<br/>— yup"]`. For the most part `<br>` works too, but the error message is indecipherable when it doesn't.


Anonymous at 10:04 AM on June 25, 2020 | #16776 | reply | quote

#16776

I had to find and watch some of Elliot's videos to get an idea of what the tree is expected to look like.

I was thinking it could be fun to add functionality to the mermaid-live-editor in order to allow the source data to be created automatically as you interact with the tree view area. It could be a good starting point to build an interactive discussion tool which is based off of a tree view.

I will attempt to render it here without saving the image to a place I control (note that I am thinking about creating my own discussion plugin and think a preview feature would be nice):

![](image url)

[](https://mermaid.ink/img/eyJjb2RlIjoiZ3JhcGggVERcblxuQVsxIC0gMiArIDNdIC0tPkJbK11cblxuQiAtLT4gQ1stXVxuXG5CIC0tPiBEWzNdXG5cbkMgLS0-IEVbMV1cblxuQyAtLT4gRlsyXSIsIm1lcm1haWQiOnsidGhlbWUiOiJkZWZhdWx0In0sInVwZGF0ZUVkaXRvciI6ZmFsc2V9)


Gavin Palmer at 5:03 AM on June 26, 2020 | #16786 | reply | quote

mermaid tree stuff

#16786

FYI I've got automagic tree generation based on whitespace indented trees working here: https://xertrov.github.io/fi/ex-parsing/03-ex/

(I'm writing out s-expressions that are whitespace indented, but my s-expression-to-tree code cheats by just considering whitespace and removing all the parens)

pre-compiled page: https://github.com/XertroV/fi/blob/master/docs/ex-parsing/03-ex.md

tree-generation code: https://github.com/XertroV/fi/blob/master/docs/_layouts/default.html
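
The general idea, as a simplified Python sketch (this is *not* the code linked above; it assumes a fixed indent width and plain one-line labels): indentation depth decides which node is each line's parent.

```python
# simplified sketch: build a tree from whitespace-indented lines.
# assumes 4 spaces per indent level; labels are whatever is on the line.
def parse_indented(text, indent=4):
    root = {"label": "root", "children": []}
    stack = [(-1, root)]  # (depth, node), deepest last
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip(" "))) // indent
        node = {"label": line.strip(), "children": []}
        while stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1]["children"].append(node)
        stack.append((depth, node))
    return root

example = """
+
    -
        1
        2
    3
"""
print(parse_indented(example))
```

Run on that indented version of 1-2+3, it gives the same shape as the mermaid tree above: + with children - and 3, and - with children 1 and 2.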


max at 1:10 AM on July 2, 2020 | #16819 | reply | quote

s/pre-compiled/pre-compilation/


Anonymous at 1:10 AM on July 2, 2020 | #16820 | reply | quote
