This is Max's discussion thread. Max has agreed to only post as "Max" in this thread.
Here's the YouTube playlist where I tutor Max.
Overreaching, greatness, and ~meta-knowledge
Consider people who are *great* (like exceptional) at something in particular.
One of the things that makes them great is ~*meta-knowledge*: knowledge about the context surrounding their *actions*.
I watched a bit of a recent Sea of Thieves WR speedrun - particularly the events during 7:25:00 -> 9:00:00 (it's like a 21hr run).
They lost like 1:20:00 from a choice to steal another crew's loot b/c that crew chased them for a decent while.
A third ship joined in a bit, too.
Near the end of this chase (8:49:00) they spot another sloop (ship of 2 crew) and one guy jokes about taking this new ship's loot.
The two speedrunners have been talking about what to do at this point, and particularly risk/rewards tradeoffs for how to sell the loot.
The two guys are good enough to - ordinarily - take on another sloop no problem.
after all, they just fought off 2 other crews of sizes 4 and 3.
their choice not to go after the sloop (and the humor of the joke) is based in this like ~*meta-knowledge* type stuff.
it doesn't matter how great you are at something, even the best ppl in the world know there are some challenges they won't win (or it's too much of a risk), and they choose to back off. they're not OP just because they're the best in the world.
Generalising this means something like: the ~meta-knowledge is *at least* as important as the knowledge about how to do the skill well (which is more like technical knowledge). Or, at least it's that important at high levels.
Basically, this is like "don't overreach", or rather, if you do overreach, don't expect to *still* be great. the ability to pick challenges is part of the reason great people are great. sorta like flying *close*, but not *too close*, to the sun.
It also relates to knowing your limits, either when something is too big a task or when (and what) to learn before doing it.
This offers a bit more clarity for an ongoing conflict of mine - something to do with learning styles and methods. I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits. and I mean it's not as bad as doing nothing at all (I guess it could be sometimes), but it's not as efficient as directed and non-overreaching learning.
I think part of the reason I have this conflict is in essence thinking too much of my own skills. That's true even tho I went through a few ~breakpoints early on in the *Tutoring Max* series. (Breakpoints might not be the right word, but I think there are like significant points of increased ~reach when we adopt new and better ideas about ourselves.)
Mini PM on post formatting
curi.us doesn't format exactly like markdown does. with markdown a newline between consecutive sentences doesn't make a new paragraph, but it does here.
I wrote the above in vscode (posted also on my site in a new category), so I wrote it like normal markdown - using linebreaks to make sentences clearer, easier to read/write while editing, etc. (How the paragraphs are meant to look.)
The solution is I'll need to check for that beforehand. I could write a short script to strip linebreaks between consecutive sentences, but not sure that's worth it.
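A minimal sketch of such a script (assuming plain-text input where blank lines separate paragraphs; the function name is just illustrative):

```python
def unwrap_paragraphs(text):
    """Collapse single newlines inside paragraphs; keep blank-line paragraph breaks."""
    paragraphs = text.split("\n\n")
    return "\n\n".join(" ".join(p.splitlines()).strip() for p in paragraphs)

# example: two sentences on separate lines get joined into one line
sample = "First sentence.\nSecond sentence.\n\nNew paragraph."
print(unwrap_paragraphs(sample))
# -> "First sentence. Second sentence.", a blank line, then "New paragraph."
```

Piping a post through something like this before pasting would stop curi.us treating each linebreak as a paragraph break.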
Inefficient learning is like eating the seed corn
> an ongoing conflict of mine - something to do with learning styles and methods.
Sometimes I prioritize the wrong thing. I'll spend time on fun (and maybe even slightly useful) 'intellectual' activities like coding, instead of doing more structured, efficient, and goal directed learning. That's like eating the seed corn.
It's like: I end up fed, and I still have some seed corn left over, but the harvest isn't going to be as good. What's the point of learning and thinking if not for the harvest?
indirectly related: #18025 and https://curi.us/2378-eliezer-yudkowsky-is-a-fraud
#18032 Metaphorically seed corn = capital = e.g. machines. it's stuff you can save for later. time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).
> time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).
I'm not sure if we disagree on something or not. I think we roughly agree but I'm thinking of time spent in a specific way (just a subset of the time we get). For context, I read curi.us/2378 a few minutes before having that idea. I liked these bits particularly (and liked being reminded of them):
> capital goods, not consumption goods
> accumulation of capital increasing the productivity of labor
I think time can be sometimes seen like money and other capital goods. How do people save money? One option is a bank account, but that performs poorly, and is sort of like investing in loans/debt, anyway. Better investors save money by spending it *on capital goods* (and they choose the goods). After they spend money, they don't have it any more, but they have something else they can exchange for money later.
I think *time spent on learning* is similar, but not all time spent is similar. Granted, there's no bank account for time, but you can spend time now so you get more of it later -- that's one of the reasons to learn and think, you - in essence - get more time in the future because you avoid making mistakes or being slower than you could be. In that sense it's like investing in productive capacity. There's a higher upfront cost, but you get a higher capacity and larger RoI than the alternatives. The choice to spend time learning ineffectively seems to me like spending some chunk of your factory budget on hookers and cocaine; fun at the time, but it's in opposition to the main goal.
Similarly, by analogy, learning skills that don't end up helping you, but learning them effectively, is like market risk. Not every investment makes a profit, but diversification helps, and the better you are the less you waste.
Time spent on things like downtime is different from normal money; that's more like $100 of food stamps that you get once a week and have to spend that same week. You might only be able to spend it at low-quality grocers, but avoiding spending it only hurts you.
A related trap with spending time on learning is trying to spend downtime doing pseudo-learning stuff. That's more like trying to invest your $100 food stamp (not going to go well). I find trying to do ~learning stuff when I'm tired etc. often means I stay up later, sleep worse, and have less high-capacity time for important things.
Eating seed corn is like disassembling machinery for scrap metal, which is different (more destructive) than leaving it idle for a day (which sounds reasonably similar to spending a day of your time unproductively).
#18037 yeah okay, I see what you mean. I've changed my mind on the quality of my analogy. (I don't think it's super bad or anything, just not as good as I originally thought.)
Perimortem on intuitive response to #18037
My intuitive response (which would be put a bit defensively) is something like: disassembling machinery is like eating *all* the seed corn, and leaving it idle is like skimming a bit of corn off the top. Things keep working; there's still productivity and returns, but less than otherwise.
(note: I think this is valid, and it's why I don't think my analogy was all bad)
I think that intuitive response is wrong though. It's subtly moving the goal posts (similar to e.g. a "strategic" clarification), and would be expressing an idea like: "we're both right, we should blame miscommunication". That'd be dishonest though, because:
a) I didn't see some limits of the analogy that I do now - this contradicts the idea of miscommunication being a primary issue (it's not important if curi and I understood each other fully in every way; we understood each other sufficiently), and
b) the reasonable next steps from a miscommunication would be to figure out how to avoid it. Some miscommunications are due to like ~inferential distance but that doesn't make sense here. The easiest solution (if it really was miscommunication) would have been for me to be clearer originally. If I advocated that (and claimed I could have done it) I'd be pretending like there wasn't ever an issue; at the very least my lack of clarity would be an issue. Maybe I couldn't have been clearer for lack of knowledge, in which case it'd still be dishonest--and evasive--to claim a miscommunication b/c that wasn't the problem.
I don't know any way that my intuitive response would have been good, which is the reason I wrote this perimortem.
I'm not sure if putting the response in this perimortem is like a roundabout (and/or cowardly) way of trying to say the idea anyway. However, I think writing the perimortem is a better alternative than making the titular reply, so I'm satisfied for now.
> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.
Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.
Max's postmortem on #18030 #18043 #18050
IR wrote (addressing curi):
> i feel very much like i have gotten some of these ideas from you, but i dont know which things youve wrote that i got these ideas from. and i dont know how much ive changed them.
I asked IR:
> Otherwise, does it matter how much you've changed your mind?
which didn't make much sense. Context: #18030 #18043 #18050
I think 2 main things happened:
1. I wasn't careful when reading IR's comment, so missed important details / relations. (i.e. he was talking about changes to curi's ideas in his head, not changes to his own pre-existing ideas in his head)
2. I've been thinking recently about how my own ideas have changed over the last ~3 months.
(1) allowed me to ~*skip between trains of thought* without noticing. I ended up thinking about IR's comment in terms of (2). My question to IR makes more sense in this light.
Beyond the issue of miscommunication in general, there's a bigger problem I should care about and deal with. That is: responding earnestly to someone (usually) takes longer than reading what came immediately before. If I spend time responding to what I *thought* they wrote (but I'm wrong about that) then it's, in essence, wasted time. Maybe there are some benefits, but they're less than they would be otherwise.
To avoid this sort of thing the obvious answer is reading stuff better. That doesn't feel super actionable tho b/c just concentrating more on ~*everything* I read is not v efficient, esp if this sort of issue isn't super common.
I could try re-interpreting what the person says, like re-writing out what I thought they meant before replying, but how would I know if that were right/wrong? It might make it clearer to me if I was *unclear* about what they thought. It doesn't help if I think I know what they meant and that idea is clear and consistent in my mind (as it was in this case).
This issue was - I think - that the reference "these ideas" is somewhat ambiguous (or maybe just tricky). I think IR's full sentence (expanding "them") is something like:
> and i dont know how much ive changed [my version of ideas I got from your ideas relative to the original ideas you wrote]
So, this might be a better sketch of what to do:
- recognise tricky references (ideally automatically)
- when tricky references occur, expand them out (there could be more than one possibility)
- criticise the possibilities so I get just one
- if I can't and it's ambiguous still, ask a clarifying question (listing the possibilities too)
- optionally respond to each possibility if short enough or easy enough
- if I get one and it's reasonable I can just respond
- if I get one and I'm not sure it's reasonable, ask a clarifying question and respond at the same time
the next step in this action-plan-sketch is "recognise tricky references (ideally automatically)". **The first part of that is introducing a breakpoint (in the coding sense) on tricky references.** I can do this a bit by paying more attention to references in general, trying to quickly figure out what they mean (and eval-ing if I know what they mean), and taking action if I don't. If I'm not 10/10 confident on the reference I should stop and investigate.
Okay, this feels like a decent PM and plan. Feedback welcome/appreciated. It was a bit trickier than normal to figure out what to do because a plan like 'learn2read' didn't feel good enough.
>> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.
> Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.
I hadn't considered this. It makes sense. That said, I don't think it's what I had in mind.
The italicised bits of this example are a bit of an outline.
An example is the route-finding-app I made for my SSOL speedrun: *I spent way too long* trying to get the PNG of the map as a background image behind the lines and points that get drawn. *Eventually I managed it* (after lots of different attempts and integrating bits of code I found online). *The main difficulty* was that the original author of the (simple) travelling salesman program used Haskell's GLUT library which is basically a *lowish-level* OpenGL lib (and *I'm not familiar* with low level opengl stuff). There are higher-level ones that make this stuff easy. *I only really cared about the outcome but it took way longer than I wanted it to.*
I didn't read a manual or in-depth tutorial, instead tried to fumble my way through. That is sometimes faster. But you can't answer stuff like 'how long is left till I finish?' and other basic questions.
In some ways my process involved exploring as you describe. I toyed with the idea of switching to a higher level library, looked for higher level stuff that exposed/integrated with the lower level stuff (no luck), and read bits from the middles of some advanced/in-depth tutorials.
But, crucially, the exploring was a side-effect of a particular problem with the other bits. I'd say my choice of method when trying to get the PNG to draw on-screen was exploratory learning, so it's different to exploration as you describe (though somewhat related).
Eventually I found some code someone had written that was close enough to what I needed to make it work. There was a weird interaction with other code I'd written tho (involving drawing text), that meant the first line of text was the right size but all the other lines didn't appear on screen. I managed to fix that but it took another like 30 min of experimentation.
A better method - in hindsight - would have been to just do a tutorial for Gloss (an alternative opengl-based library, but much higher level) and recode what I'd already written, and the opengl bits that came with the app originally. I could have gone through enough of a tutorial on Gloss given the amount of time I spent (like 5hrs+).
I did learn other stuff during that time, but I didn't feel like the time was particularly well spent. I don't expect to use OpenGL + Haskell much in future, so it's not like this is particularly useful outside this one thing I wanted to do.
In some ways I do this stuff for the challenge, like thinking "I should be able to do this, so I will", but I don't think "I should be able to do this, eventually, but should I bother, or should I look for a different way to do the same outcome?"
TCS and passions
I was thinking about a TCS issue yesterday. I have half a soln. It's about a child's passions.
There's a possibly coercive idea I have that I think is the *common-er* version of the problem (maybe), then there's a more general version.
the possibly coercive version is like:
> I want my child to have a passion for maths (coercive), or
> I value passion about maths in general, and I want my child to be able to develop that if they want -- I don't want to *hinder* them (coercive?)
The second formulation feels like it could be done okay--without coercion--but I don't know enough to tell for sure.
I was thinking about this in the context of **a parent who's bad at maths**.
This made me think of a possible common issue *most* ppl would run into if they tried TCS: *their skills/passions are inadequate (not broad enough and general enough) to avoid hindering the child.*
I think not being perfect is okay, but if we can avoid significant hindrance that's good.
One situation is if the child develops a passion for X but the parent isn't good/passionate about it, they can still buy equipment/supplies or hire tutoring or find a friend who's passionate, etc. This is the 1/2 solution I mentioned.
But more broadly, how do you facilitate the *development* of a passion before it's manifested?
One thing I was thinking about is when ppl have been passionate about something and sparked something in me. A good example is Haskell and type-safe programming; a guy at a technical meetup sold me on Haskell over a beer. It took me *years* before I actually used it in production, but I was sold in 20 min.
So exposing a child to a wide range of *passionate* people--who are probs the higher-value ppl to expose children to, anyway--is maybe one way, though that could be done coercively. If you happen to be friends with passionate ppl and they visit and talk to your child, that feels different than like *engineering situations* to trick your child or something.
I haven't looked through the archive to see what other ppl have said on the topic, yet.
Quick thought on a secondary goal of life.
I think a good secondary goal for one's life--or maybe another primary goal as yesno supports--is to live without control. By that I mean: live so that you are both happy with your decisions consistently, and also make those decisions without willpower or self-control. All choices still somewhat involve self-control and some moral code; except that you have no animosity toward those choices; they're choices you'd always make anyway. It's sort of like having no friction.
Ofc there will always be conflicts and problems to solve, but this state is like the closest you can get to that *and sustain*.
Debate Topic (via Tutoring Max 44) -- Genes and direct influence over mind
> Genes (or other biology) don’t have any direct influence over our intelligence or personality.
I'm not sure about this. I don't think humans being universal understanders/explainers means genes *don't* have a direct influence over our mind/personality (esp. starting conditions). It seems reasonable that physical effects on the brain can have an effect on our mind/thinking (e.g. brain tumors, head trauma), and genes affect things in ways we don't fully understand, so there's room for them to have a direct effect.
#18153 What sort of effect or influence do you have in mind, via what causal mechanisms?
For example, genes could make it so we're better at integer math than floating point math. I don't think this would cause someone to be more inclined to solipsism than an alien that excels at floating point math. And there could be variance among humans, but I don't think that would cause some people to be atheists.
> What sort of effect or influence do you have in mind, via what causal mechanisms?
I'm not sure about this possibility, but it's a thing I've heard or seems to be a somewhat common idea:
- temperament: Say someone has a gene that means they produce lots of some hormone. That hormone makes them angry more often / more easily.
Does this sort of thing count as a direct influence over our personality? I can see a person like this 'learning to control' themselves or something, but I'm not sure exactly what you mean by directly influencing personality.
More broadly, I see room for unknown causal mechanisms, esp. relating to things that make sense to have evolutionary roles, like social stuff. I could see some genes playing a role in how readily someone accepts static memes based around certain social signals (e.g. in-group/out-group stuff).
> For example, genes could ... but I don't think that would cause some people to be atheists.
I agree that there are ways genes could affect our brains at a lower level (like an instruction set affects CPU performance) and that this sort of effect isn't substantial.
> - temperament: Say someone has a gene that means they produce lots of some hormone. That hormone makes them angry more often / more easily.
Hormones are low level. Behaviors and emotions are high level. It's kinda like suggesting that heating a room with a CPU in it might result in video game bosses attacking more aggressively. Low level changes do not cause high level changes that have the appearance of complex design unless there's a specific causal mechanism set up to enable this (e.g. sleep or volume button on a computer).
> Does this sort of thing count as a direct influence over our personality?
You could get annoyed more when hot or cold. Does that mean heat and cold influence personality? I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.
> Low level changes do not cause high level changes that have the appearance of complex design unless there's a specific causal mechanism set up to enable this
*A*: Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).
> I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.
*B*: It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.
I googled 'personality' and found a sensible-feeling definition about patterns of thoughts, feelings, and behaviours. Those are all based on ideas, so by that definition personality is just a collection of ideas.
I'm not sure if parts A and B contradict each other. I'm not super happy with this reply but I think the result might be going back to another part of the conversation.
PS, I labeled the paragraphs to refer to them, hopefully that made sense when reading.
> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).
Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.
> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.
I don't think that and I don't see how my text implied it.
> so by that definition personality is just a collection of ideas.
I agree with that.
> Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.
Yes. I think most ppl presume a super tight relationship between them. That doesn't seem right--thinking about it now.
*Some* effect might be there, but that's like a transition between levels of emergence, and probably means I don't have a point here.
Going to drop this angle for the moment.
> > It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.
> I don't think that and I don't see how my text implied it.
Given you agreed with "personality is just a collection of ideas" I'm not sure this is important to discuss unless you think so. I can explain why I thought the implication was there if you want.
**concluding comment**: I think I agree with you that hormones don't influence personality/thoughts in a substantial way (I think you agree with that at least).
I think at this point it's up to me to come up with some other causal mechanism? Or the only other node on my conversation tree I have to look into atm is mine about unknown causal mechanism.
> Going to drop this angle for the moment.
Do you think you made an error? If so how'd that happen?
> I can explain why I thought the implication was there if you want.
Yes I'm curious.
> I think at this point it's up to me to come up with some other causal mechanism?
That's an option. Another is I could play devil's advocate and take the other side of the matter. Another is you could ask questions or think about stuff like how reacting to a hormone differs from reacting to an event like a sick parent, winning a competition, getting a high or low grade, etc. Our emotions and moods are causally connected to all sorts of things but the basic point is the connection is governed by our ideas: we can decide how to react to a particular event and if we had different ideas we'd react differently. The hormone/genes/etc ppl are claiming roughly that something different/special is going on in their case. Having a clearer idea of what the claim is helps with evaluating it.
> Do you think you made an error? If so how'd that happen?
Yes, will do a postmortem in a different post.
> Yes I'm curious.
Cool, will also put this in a diff post because it feels off-topic.
> That's an option. Another is ...
I want to take a bit to think about where to go from here. I didn't really consider how many possibilities there were. Some of those options I might be able to follow myself (like a thought experiment) to see where they lead.
BTW, @curi, I think it was good we didn't do the Bitcoin option today. This feels (and felt) valuable even though I don't think of myself as anything close to an expert.
Thought on why FI is special
I've been thinking about why FI is special/different. It's related to the general topic of FI and new ppl, their reactions, etc.
curi said in Discord:
>> [12:14 AM] Laozi Haym: it isn't anything new i need to watch what I say, I just ...was watching what I was saying 2 days ago on here
> if you're mistaken about something it's better to say it and get criticism rather than hide the error. so i generally don't like people watching what they say. and feeling pressured about it sucks too.
> sometimes people try to say only their highest quality ideas but they don't go through life using only those. most of the time they're not at their best. what you do when you're tired or distracted is part of your life, and should be exposed to criticism too.
I think part of FI being different is to do with the culture related to things like "if you're mistaken about something it's better to say it and get criticism", and "what you do when you're tired or distracted is part of your life, and should be exposed to criticism too."
When people come to FI they don't expect other parts of their life (maybe implied by things they said) to be questioned. It doesn't adhere to normal social norms. That's--in part--b/c those ppl and normal social norms don't value stuff like: error correction, every person and discussion being a potential beginning of infinity, the capacity for ppl to make progress (esp rapidly), etc. There is some lip-service paid to these ideas, and they're taken somewhat seriously in dire situations, but they're not like culturally ubiquitous, common, or expected.
That lip-service is part of the reason pointing out those things individually doesn't work to differentiate FI; everyone says it, and everyone says they're honest. But the culture is different; what's tolerated, what's expected, what's prioritized, what things are seen as important.
Even that paragraph doesn't work outside this sort of context. I don't expect it would convince anyone who didn't already understand it (at least: understand it enough to know what I was trying to get at and whether I had mistakes, etc).
>>> I think how one responds to heat, cold or hormones is part of what one's personality is. But they aren't controlling your reactions. The reactions are your choice based on your ideas.
>> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.
>> I can explain why I thought the implication was there if you want.
> Yes I'm curious.
I think this was what I was thinking:
- response to stuff like heat is a part of one's personality
- the stimulus doesn't control your reactions
- reactions are choices based on one's ideas
- so there's a chain like: stimulus -> physiological signals -> interpretation (ideas) -> meaning (ideas) -> choice of behaviour (ideas) -> response/reactions
- personality is included in this chain only via the links that have `(ideas)`
- we can't see the ideas, but we can see the response/reactions (the outcome)
- (premise) to understand someone's personality we need things we can study / think about
- the reactions and stimulus are the only parts of that we can easily agree on without like inference/explanation
- stimulus doesn't tell us about personality
- reactions and response do, though
- so reactions are key to understanding personality
Postmortem on hormones-mood link
>>>> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).
>>> Are you linking hormones to moods? You bring up something about hormones affecting thoughts but then the next sentence doesn't mention hormones.
>> Yes. I think most ppl presume a super tight relationship between them. That doesn't seem right--thinking about it now.
>> *Some* effect might be there, but that's like a transition between levels of emergence, and probably means I don't have a point here.
>> Going to drop this angle for the moment.
> Do you think you made an error? If so how'd that happen?
I implied mood and hormones were linked. I didn't explicitly mention it.
When curi pointed out I linked them I realised that I was presuming an intimate relationship and that I didn't have a good explanation for it.
There's a ~common idea that they're intimately linked. I think, in general, it's a good way for ppl to avoid taking responsibility for their reactions to stuff. e.g. women are more irritable on their period and so they shouldn't be held to as high standards / ppl should be more forgiving of them getting upset / etc. This is roughly called a 'mood cycle', which is explicitly linked to hormonal cycles of the same length (I've heard 28 days for women and 33 days for men).
When curi pointed out my linking hormones and moods I thought about the common idea and questioned it. I didn't question it when I first used it though. Why didn't I question it?
Intuition: In general when we're thinking about something particular there are ideas that are 'in the front' of our mind and other ideas 'in the back' of our mind. We are actively engaging with the 'front' ideas, but not the 'back' ideas. (Maybe the 'back' ideas could be called background knowledge but that term feels like it describes a slightly different thing.) To question an idea it needs to come to the 'front'. It's sort of like a module of code: we interact with the API but we don't interact with the internal logic. When ideas are at the 'front' we're looking at the internal logic and API, but at the 'back' we're only looking at the API. We use shortcuts to know how ideas at the 'back' interact with stuff.
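The code-module analogy might be sketched like this (all names here are hypothetical, purely to illustrate 'front' vs 'back'):

```python
# 'Back of mind' ideas are like a module's internals: used via shortcuts,
# never inspected. 'Front of mind' ideas are like the API we engage with.

def _assumed_hormone_mood_link(hormone_level):
    # internal logic: the unquestioned assumption lives here
    return "irritable" if hormone_level > 0.5 else "calm"

def predict_mood(hormone_level):
    # the API: callers use this without ever reading the internals above
    return _assumed_hormone_mood_link(hormone_level)

# questioning the idea = opening the module and reading the internal
# function, not just calling predict_mood and trusting its output
print(predict_mood(0.9))  # -> irritable
```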
So by that intuition: I had the hormones->mood link in the back of my mind and didn't think about the internal logic until curi brought it to the 'front' by pointing it out.
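The module analogy could be sketched in code. Everything here is invented for illustration (the function names and the 0.8 'link strength' are made-up, not claims about real hormones):

```python
def _mood_from_hormones(hormone_level):
    # 'internal logic': an unexamined assumption lives in here --
    # callers never see it unless they open the module up
    HORMONE_MOOD_LINK = 0.8  # assumed strength, never questioned
    return "irritable" if hormone_level * HORMONE_MOOD_LINK > 0.5 else "calm"

def mood_today(hormone_level):
    # 'API': this is all a caller normally interacts with
    return _mood_from_hormones(hormone_level)

# using only the API is like a 'back of mind' idea: the shortcut works,
# and the internal assumption stays invisible until something forces it
# to the 'front'
print(mood_today(0.9))  # -> irritable
```

Questioning the idea corresponds to reading `_mood_from_hormones` rather than just calling `mood_today`.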
I'm a bit worried that this is just a long winded way of saying something like 'lazy thinking', but it feels like there's probably more to it, so I'm okay with it for the moment.
One of the ways I could avoid this is by categorizing old 'background' ideas (like the hormones-mood link) as stuff I need to reconsider before using. In some ways it doesn't matter much if I get ~lots better at thinking WRT 'front' ideas, but keep using bad ideas as foundations without questioning them. So I need to make a habit of questioning ideas I use as a foundation if I haven't considered them since improving my thinking. There are practical limits on this, like lots of my preexisting ideas are fine (or at least fit-for-purpose at the time) and reconsidering them consistently would be significant overhead. If I'm using ideas as part of my reasoning, though, that's a good reason to reconsider them, at least briefly.
#18185 Be careful with complex interpretations of other people. Often you should check if they agree instead of assuming you got it right. And I don't think I said stuff that corresponds to your "core" or "only way".
> Often you should check if they agree instead of assuming you got it right.
I think I was trying to do that with:
>> It feels like you're implying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions.
If that wasn't clear, is there a good way to do it better? I could explicitly say "to check I have this right, are you implying ... ?". That feels cumbersome though.
#18186 ok so how would you revise your original claim:
> Don't we have a (rudimentary) explanation for hormones affecting thoughts, though? I know--personally--I think different things when in different moods (at least I think that's the case).
(you may want to grab more text/context to also revise)
#18188 "feels like" is kinda vague but generally (when there aren't clear emotions involved) reads similar to "i think". i don't read it as a question or requesting confirmation.
A question version at around the same length is:
> Are you saying reactions are core to understanding personality, like the only way we can inspect personality is via its effect on our reactions?
>> I think at this point it's up to me to come up with some other causal mechanism?
> That's an option.
I have a few ideas for causal mechanisms:
* genes encode some ideas which are 'given' to us early in life
* so there could be flow on effects
* this isn't really a _direct_ influence on thoughts, though.
* or maybe: ideas have different classes of components e.g. like ideas about 'relationships between people' are one of those possible components. if there are optimisations the brain has that directly relate to some phenotype (like volume of that brain-part) then the weighting between generation of idea-components could differ thus ppl with certain genes are more likely to think of certain stuff.
* note, after I wrote "ideas have different classes of components" I strongly questioned why I wrote that, I don't think I have a good reason. I think that is reflected in the following 2 points:
* but we don't know anything about these idea-component things
* so this 'causal mechanism' is just kicking the can down the road by introducing another unknown causal relationship as part of this explanation
So I don't think I have any good ideas for causal mechanisms.
I don't think I could convince myself that genes have a direct influence over our thoughts. But I can't convince myself they *don't*, either. I can convince myself that I shouldn't believe they do.
I'm open to other ways to move the conversation forward if you have ideas.
The dotpoints above are in this hierarchy:
> * genes encode some ideas which are 'given' to us early in life
Consider a gene pool of, say, wild dogs. Using nanobots, you tinker with it. You sterilize or kill some dogs, or manufacture others, or whatever. You don't make huge changes. You just change the initial conditions. Then you leave the dogs alone for 100 generations.
Do you expect the tinkering to change the end results much? In general I don't. The selection pressures of the environment will control the results. E.g. if you make the dogs have more fur on average, but it's a warm climate, then I think they'll end up with less fur anyway.
Similarly, I don't think the initial ideas in the brain matter a lot. Make sense?
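That dog/fur point can be shown with a toy simulation (the numbers are invented): wherever the tinkering starts the trait, the selection pressure toward the environment's optimum dominates after 100 generations.

```python
def evolve(fur, optimum=0.2, pressure=0.3, generations=100):
    # each generation, selection nudges the average trait a fraction
    # of the way toward the environment's optimum
    for _ in range(generations):
        fur += pressure * (optimum - fur)
    return fur

# start the gene pool furry (0.9) or sparse (0.1) -- in a warm climate
# (optimum 0.2) both end up at (nearly) the same place
print(round(evolve(0.9), 3), round(evolve(0.1), 3))  # -> 0.2 0.2
```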
Another pov is you can build ruby on C or java foundations and have the same language. Once you add a few layers of abstraction over the initial functions/APIs/whatever, then the details of them end up not mattering (unless e.g. they were really broken or manage to cause ongoing performance issues).
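A sketch of that abstraction point (the function names are invented): two different 'foundations' sit under the same interface, and at the top layer the difference is invisible.

```python
def add_foundation_a(x, y):
    # one low-level implementation (think: the C foundation)
    return x + y

def add_foundation_b(x, y):
    # a different low-level route to the same answer (think: the Java one)
    total = x
    for _ in range(y):
        total += 1
    return total

def make_calculator(add):
    # the abstraction layer: users only ever see 'plus'
    return {"plus": add}

calc_a = make_calculator(add_foundation_a)
calc_b = make_calculator(add_foundation_b)
# the foundations differ, but at this layer they're indistinguishable
print(calc_a["plus"](2, 3) == calc_b["plus"](2, 3))  # -> True
```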
> I don't think the initial ideas in the brain matter a lot.
for clarity: so you think it is possible we have ideas encoded in genes that are given to ~everyone during prenatal development (or shortly after birth, w/e)?
the idea that the *initial ideas* in the brain don't have any long term significance on our thoughts (and genes can give us some initial ideas) is a stronger and different position than I thought you had.
#18193 I had in mind a dog geneticist who just sorta screwed around a bit.
If he specifically tries to cause a specific result, and puts a bunch of creativity and scientific study into figuring out what changes will cause it, then he might manage to cause it. If he can predict the environment and what'll happen evolutionarily, he might figure out what to do to the gene pool to get a specific feature to be present 100 generations later that wouldn't be present otherwise.
Does biological evolution put that kind of major design effort into controlling high level human ideas like whether someone is an inductivist? No. It doesn't even have knowledge of those things (like induction), let alone knowledge of the whole future memetic selection pressures and evolution of ideas and creation of layers of abstraction and so on that'll happen from ages 0-25. To cause being an inductivist at age 25 would require not only knowledge of inductivism (as expressed in an appropriate framework that makes sense in our present day culture), it'd also require knowledge about that whole childhood and education process and how to manipulate and control it.
How could genes do all that? And even if they theoretically could, there were no selection pressures to cause them to do it in general. You can pick tons of ideas – like that painting is better than sculpture, or that math tests should ban calculators, or that Uber should be allowed into cities immediately despite complaints by taxi drivers – and it makes no sense that genetic evolution would have set things up to control that. Maybe you could try to come up with a few special cases and an explanation of a causal mechanism, but the standard thing is no causation like this.
#18194 I think our genes set us up with adequately powerful and generic hardware + OS + maybe some initial default apps that are replaceable. I don't think these end up mattering that much cuz of choice, abstraction layers, and universality – no missing features/capabilities. As far as their ability to bias us in a particular direction (as in variance in these things between people could make some people more mathy and others more artsy, or some people more angry and some more calm), while it's not exactly zero, I think it's tiny compared to how much culture and childhood and thinking about stuff matters. It's just a drop in the ocean. (This is also DD's position btw.)
> stronger and different position than I thought you had.
What did you think I thought and what's the difference?
#18196 And I don't think the variance between people is anything like intel vs ARM chips or windows vs. linux OS. Even that isn't such a huge deal, but genes created a particular hardware and OS design and variance is limited to be more minor and not break things. Variance isn't gonna be so huge as to create a drastically different design.
> What sort of effect or influence do you have in mind, via what causal mechanisms?
I'm not sure about the causal mechanism, just that this is *an* effect and it's argued that it happened via evolution at the gene-level.
I think I might have some counterexample to the idea that genes don't play a significant role in thoughts. It's part of a bigger idea, though. I'll try and outline relevant parts of the video.
(I've bolded the key phrases)
- Lindybeige has a **theory on why women have breasts**
- He **explains why other theories aren't sufficient** (e.g. there's one idea that women have breasts to signal fertility, etc, and that theory compares humans to other animals like primates; this is refuted b/c other species have no *permanent* signs of fertility)
- There's a bit about the **EEA (Environment of Evolutionary Adaptedness) and evolutionary context** / selection pressures / social dynamics at the time (social dynamics here means like 'dynamics of hunter gatherer society')
- There's a (conjectured) **chain of reasoning and events** he goes through in early (modern) homo sapiens development involving **secret menstruation and how sexes would 'react' for evolutionary advantage**
- part of that conjecture is **male reaction to sexual signals ~flipping** to avoid being unattracted to fertile women
- and this eventually ends with women having permanent breasts
It's that second to last part about male reaction ~flipping that I think might be a counter example.
The video: https://www.youtube.com/watch?v=oWkOvakd9Mo
The reason I think it's a counter example is that this would be a way genes significantly changed thoughts. (assuming ideas like 'she's attractive' and 'she's not attractive' fit the bill for what we're considering.)
> - part of that conjecture is **male reaction to sexual signals ~flipping** to avoid being unattracted to fertile women
The idea of ~flipping is roughly:
- animals are attracted to symbols like swollen breasts / butt, particular inflammations, temporary colouring, etc.
- animals (all but humans) don't have breasts when they don't need them. They only grow them when necessary, and they're not swollen at other times
- modern women have ~swollen breasts *all* the time (there's some difference between lactating/not lactating but it's minor compared to other animals)
-- maintaining breasts costs resources, there's an evolutionary reason not to do it
- the male reaction to swollen breasts is to *not* be attracted b/c it means the female isn't fertile (this is true in other animals)
- human males around the time women developed permanent breasts had this reaction too (along with other things like fatter -> good -> more resources / better chance of children surviving)
- one evolutionary reaction could have been to like fix the 'pattern' for what males found attractive (e.g. breasts -> good now, fatter -> still good)
- but the *simplest* change necessary is just a binary 'not' - i.e. things that weren't attractive now are, and things that were attractive aren't
-- admittedly (thinking about it now) why didn't humans die out because malnourished women were selected over non-malnourished?
- so males had this gene flipped by evolution and breasts were attractive now
This sounds like a way genes had (and have) a significant role in thoughts.
Possible criticism: this is just an idea we get when we're young and some people change it, some don't, but it doesn't mean genes have a *substantial* role in affecting thoughts, just that like this one inborn(?) idea is different.
I marked inborn with a (?) because I'm not sure I'm using it right.
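The 'simplest change is a binary not' part of the conjecture can be sketched (the predicates are invented stand-ins, not biology):

```python
def attractive_to_ancestor(swollen_breasts):
    # old reaction: swollen breasts signalled 'not currently fertile',
    # so the signal read as unattractive
    return not swollen_breasts

def attractive_after_flip(swollen_breasts):
    # the conjectured mutation: a single negation of the old output,
    # rather than a rewritten pattern of what counts as attractive
    return not attractive_to_ancestor(swollen_breasts)

print(attractive_after_flip(True))  # -> True
```

Note that a pure 'not' inverts *every* output of the old predicate, not just the breasts case, which is basically the worry in the 'admittedly' dot point above (malnourished women getting selected over well-fed ones).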
> The reason I think it's a counter example is that this would be a way genes significantly changed thoughts.
It's useful to think through what sorts of genetic effects on thoughts are important and why.
E.g. being tall correlates with the thought "I like basketball" or "I want to be in the NBA" at age 25.
Genes did not evolve to have knowledge of basketball or the NBA. Height genes are just about height.
The causality here is cultural. Culture reacts to (partially) genetically controlled traits like height.
Similarly, culture has some reactions to e.g. hair and eye color, which genes have substantial control over (barring bleach, dye, colored contacts, etc).
#18200 So once upon a time humans were animals. Apes or something. Not yet intelligent. And they had behaviors controlled by genes just like cats do.
Did humans get permanent breasts then or later (after intelligence)? I'm not clear on the claim/story yet.
Anyway, later, humans become human/intelligent. Then they have memes. And memes start taking over control of lots of stuff including sexual preferences, courtship behaviors, etc. Memes evolve faster than genes and have access to better control over adult humans – ideas are in a better position to affect behavior than protein design at ~birth is.
If humans evolved permanent breasts before memes, there's no real issue, right?
If humans evolved permanent breasts after memes, that'd be more complicated. Does Lindybeige claim or address that?
#18200 Overall, you or we could go into more detail on this example, but maybe you'd be content to consider it enough of an unknown, with lots of uncertainty, that it's no reason to reject a model of how intelligence/minds/genes/etc work. I don't see that it's very important to look into this particular example more.
> If humans evolved permanent breasts before memes, there's no real issue, right?
> If humans evolved permanent breasts after memes, that'd be more complicated. Does Lindybeige claim or address that?
I can't find a reference to dates more specific than ~last 2.5 million years (the Pleistocene). If he did mention a more specific date I don't recall it and can't find it via some quick searches.
> maybe you'd be content to consider it enough of an unknown, with lots of uncertainty, that it's no reason to reject a model of how intelligence/minds/genes/etc work. I don't see that it's very important to look into this particular example more.
Yeah, I'm content to do that. It's not clear it's a counter example (and even if it were there are lots of issues/unknowns still)
The discussion about genes and intelligence above is written up in:
Conversation tree so far (recent entries less refined than older ones): https://maxkaye.s3.amazonaws.com/2020-09-28-curi-genes-int-tree-exported-2020-10-05.pdf
Tangent near the end of a patio11 thread:
> Lots of cryptocurrency projects think that there is a way for any part of their ecosystem to be done by non-professionals in the long-run and they are all fools.
> Miners, devs, promoters, capital, etc, will all be professionalized.
#18280 I agree. There's a lot of naivety around 'decentralisation'.
There are caveats, though. Increasing the accessibility of some previously professionalised thing (e.g. arbitrage) can result in more people doing it--and at lower volumes. But, in those cases, the professionalisation is just moving from the person doing the thing to the programmer(s) maintaining the feature.
(Tutoring Max #49) There are no conflicts of interest between rational men.
Talking with curi during Tutoring Max #49
Topic: There are no conflicts of interest between rational men.
## rough brainstorming
idea seems to be
- if people want to do good / make progress / improve something
- then that has to be compatible with objective reality
- reality is such that we can't choose the right path to make progress
- rational people will focus on a goal (which is not doing harm to someone particularly)
- and the method to get that goal has to be compatible with objective reality
... (idea feels unclear so I'm swapping brainstorming topic)
- 'winner' pays 'loser'?
- auction -> one person no longer wants the job?
What is the scenario, what is the conflict, and why is it not fixable?
Alice and Bob both want a particular job. They are both suitable applicants. There's only one job, so at most one of Alice and Bob can get the job.
Alice/Bob are competing for a scarce resource. They might think that their life would be worse if they didn't get the job.
There are ways to fix it by introducing e.g. another position like the first, but is it fixable without introducing stuff?
Alice/Bob could talk and one could persuade the other it'd be better not to have it.
Fixableness has a time constraint -- knowing a solution might be available in the future doesn't help the problem now.
So for it to be 'fixable' we'd need a solution that generally applies to all situations like this, and we need to be able to apply the solution right away.
#18320 It's important to think about scenarios in reality. Say the business owner, Joe, wants to interview both Alice and Bob, and then wants to hire Alice not Bob.
In what scenario does Bob get the job? What series of events? What exactly does Bob want to be different (now or in the recent past) and by what means would that change be achieved?
> It's important to think about scenarios in reality. Say the business owner, Joe, wants to interview both Alice and Bob, and then wants to hire Alice not Bob.
This sounds like a situation where, if Bob knew Joe's thoughts, he shouldn't want the job. If Joe's already made up their mind, wouldn't that be a reason for Bob to spend efforts on other opportunities?
> In what scenario does Bob get the job? What series of events? What exactly does Bob want to be different (or or in the recent past) and by what means would that change be achieved?
Bob gets the job if Joe changes his mind, or Alice finds another job (or otherwise withdraws).
Joe might change his mind if he finds out something bad about Alice, or if it turns out Joe's idea of Alice was wrong. There could be lots of ways that happens, but it's not something that can be relied upon. Joe might also learn something new about Bob.
Generally it seems like either Joe or Alice would need to change their mind or learn something new for things to end up with Bob getting the job.
Bob wants Joe's opinion to change (the opinion that Alice is the better one to hire). Bob could do a really good interview and persuade Joe -- or something like the above could happen.
I guess something unexpected could happen too (like Alice getting hit by a bus) but I don't think Bob wants that so it seems pointless to expand on.
#18322 It’s in both job seekers’ interests that jobs are given out according to the capitalist system where the business owner or his proxy decides who to hire. If he hires Alice, there’s no way Bob could have that job other than if a different system were in place. But that system would make everyone much worse off including Bob because it’d involve limitations on freedom, government meddling in the economy, pointing guns at people to get jobs from them, or something else bad.
People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.
If Joe is bad at hiring, that may be bad for me. I may get a worse result. But it's bad for him too. This isn't a conflict between me and Joe. He's trying to deal with life and hiring well. If he's doing it poorly, that's due to ignorance, lack of skill, etc., not due to what benefits Joe and what benefits me being in conflict.
> It’s in both job seeker’s interests that jobs are given out according to the capitalist system where the business owner or his proxy decides who to hire. If he hires Alice, there’s no way Bob could have that job other than if a different system were in place. But that system would make everyone much worse off including Bob because it’d involve limitations on freedom, government meddling in the economy, pointing guns at people to get jobs from them, or something else bad.
> If he hires Alice, there’s no way Bob could have that job other than if a different system were in place.
One way Bob could have the job is if Joe had better ideas -- in the case Joe has mistakes in his thinking. That seems like it'd be compatible with the same system. If we're presuming Joe is rational, isn't that a somewhat high bar? I'm not sure everyone could measure up to it.
> But that system would make everyone much worse off [...]
I agree for lots of these possibilities. Systems that use violence to enforce rules on this sort of thing would be bad.
> People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.
* this sounds like approximately: principles trump circumstance
* * it's better to be working within a good system than profiting in the short term from a bad system, even if a circumstantial outcome is superficially less good for you.
I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.
Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?
> If Joe is bad at hiring, that may be bad for me. I may get a worse result. But it's bad for him too. This isn't a conflict between me and Joe. He's trying to deal with life and hiring well. If he's doing it poorly, that's due to ignorance, lack of skill, etc., not due to what benefits Joe and what benefits me being in conflict.
Okay, I see how this answers the idea that Joe's ideas have something to do with a conflict of interests. It'd be in both your interests for Joe to be better at hiring if he was bad at it. But Joe can't magically get better. So Joe just is what he is in that role. It's better he make a free choice than be coerced or something. So any alternative system that coerces him is worse, and in any system where he has a free choice he'd act roughly the same anyway.
> One way Bob could have the job is if Joe had better ideas -- in the case Joe has mistakes in his thinking. That seems like it'd be compatible with the same system. If we're presuming Joe is rational, isn't that a somewhat high bar? I'm not sure everyone could measure up to it.
Yes but having better ideas is also in Joe’s interest. The problem here is that good ideas are hard to come by and people aren’t perfect, not that Joe prefers bad ideas. So it’s not a conflict of interest. I also commented on this in #18324 which I don’t think you saw yet.
>> But that system would make everyone much worse off [...]
> I agree for lots of these possibilities. Systems that use violence to enforce rules on this sort of thing would be bad.
>> People commonly have mutual interest that something is decided by a certain method which has good traits like being fair, free or rights-respecting. That a particular outcome goes against me doesn’t mean it’s in my interest to change the system itself. With capitalist hiring, I’m much better off applying for some other jobs than living in a society without a capitalist economy.
> * this sounds like approximately: principles trump circumstance
> * * it's better to be working within a good system than profiting in the short term from a bad system, even if a circumstantial outcome is superficially less good for you.
How would Joe profit from a bad system?
If the system is e.g. you use bribes to get a job, then maybe he'd get this particular job (or maybe Candice or Dillon would get it, who knows). But he'd certainly run into the problem of "someone beat me out for the job I wanted" in a bribery-based system.
It's the same with a system of favors and friendships. It's hard for Bob to know he's the best connected applicant this time, even if he knows he has a stronger social network than Alice. And even if he would have gotten this job under that system, he'd miss out on others. It wouldn't solve the problem of Bob not getting every job he applies for.
Bob, if he's bitter, may not understand the purpose of having job applications. Why have more than one person apply for a job opening that's available only to one person? The point is to try to use some objective tests to find a good candidate. If Bob doesn't want that to happen, then he's giving up on earning jobs by merit as a lifestyle. And he's imagining a world where, what, only one person is allowed to apply for each job? What's that even mean? The King just tells you what job you can have? Or first come first serve?
> I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.
I doubt that any of the general purpose systems like "bribery" or "favors" for assigning jobs actually would offer Bob all the jobs he wants. They might well fail to give Bob this particular job. They might well not only deny Bob this job but make it much harder for him to find an alternative one.
But those are generic, principled systems, even if the principles suck. What about a biased system? What about a system where Bob is in charge of everything? Would *that* be in Bob's interests? Should people want to be a king?
> Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?
I don't know a better system than capitalism/freedom/property-rights/etc.
>> I agree with: a world with short term good outcomes from a bad system is worse than a world with a good system.
> I doubt that any of the general purpose systems like "bribery" or "favors" for assigning jobs actually would offer Bob all the jobs he wants. They might well fail to give Bob this particular job. They might well not only deny Bob this job but make it much harder for him to find an alternative one.
> But those are generic, principled systems, even if the principles suck. What about a biased system? What about a system where Bob is in charge of everything? Would *that* be in Bob's interests? Should people want to be a king?
I think it's rational to want systems which can be agreed upon by everyone. Sort of like a 'lowest common denominator'. I don't think rational people want a system that's unfair--like Bob being in charge of everything.
I don't think people should want to be a king. One reason is that if I wanted to be a king, and was willing to do necessary things to achieve that, then I should expect other people to do so too. That just ends in violence, etc. Another reason is that if we were all kings it would be like having a billion city states, which would suck b/c we'd end up like subsisting.
There are reasons based on principles too, like being a king means using force to get your way, which is bad. But not everyone agrees on those. I think people more generally agree on practical stuff like 'if we all did that we'd all have nothing'. That's why I chose to write the two practical reasons.
>> Do you think there are any other methods by which jobs could be handed out? Does Joe having better ideas count as another method?
> I don't know a better system than capitalism/freedom/property-rights/etc.
I guess *all* other systems have to be better or worse than that. There's no orthogonal direction. I'm unsure if there are things to consider other than what we already did: stuff that looks like a conflict but isn't (e.g. Joe's ideas), and alternate systems for distributing jobs.
#18328 If Bob wants to be King, then he isn’t concerned with mutual benefit. He’s creating conflicts of interests by pursuing policies to benefit himself at the expense of others. This will result in rebellion. It gives people incentive to kill, exile or imprison Bob. It gives people incentive to work against Bob, undermine him, and make his life harder. This is actually worse for Bob than peaceful, harmonious capitalism would be.
And if Bob is to be King, how will he achieve it? A violent revolution in which he might perish or be betrayed by one of his lieutenants who wishes to be King himself?
And if Bob is already King, how does he stay in power? Secret police? Dictators often die. It’s a risky job. And if one has the skill/luck/capability to win the contest for dictator, why not put those same energies into a business instead? Bob could have been better off as a billionaire than a dictator. In general, even when crime pays, it pays less than the market rate for all the work/skill/risk it takes. Because it’s easier to make a profit when you collaborate with people than when you fight with them. It’s easier to profit when other people’s actions are helping you and making you more successful than when their actions are working against you and subtracting from your success.
And being a violent dictator or criminal leader requires rationalizing that to yourself and thus alienates you from reason and good ideas.
> Another reason is that if we were all kings it would be like having a billion city states, which would suck b/c we'd end up like subsisting.
I forgot to mention: the other option with all of us being kings is basically capitalism/freedom/property-rights/etc, anyway.
I'm stuck on something. It's like there are two ideas that feel circular but they oppose each other.
I'm worried there's like a tautology / circular reasoning b/c of the 'rational men' thing. Wouldn't rational men always agree on things (eventually) anyway? So the system doesn't have anything to do with the lack of conflict. But people often aren't rational, so doesn't that mean there might be a system which is better than capitalism?
self-commentary: saying *people aren't rational -> there could be something better than capitalism* is circular b/c the idea of something being better than capitalism was the reason for saying ppl aren't rational.
(note: I'm not really sure this is circular but I'm getting too hung up on it)
#18331 I don’t think the “rational” qualifier is required any more than the sometimes-used “long term interests” qualifier. It’s in people’s best interests to be rational and to consider their long term interests not merely the short term.
The liberal claims re harmony of interests don’t rely on unlimited knowledge. They are not like “if men knew everything, there’d be harmony”. They are about avoiding conflict now. Understanding why you shouldn’t hate competitors for a job is achievable today given currently available knowledge.
> #18331 I don’t think the “rational” qualifier is required any more than the sometimes-used “long term interests” qualifier. It’s in people’s best interests to be rational and to consider their long term interests not merely the short term.
Yeah okay. The next thing I started thinking about was whether there was a conflict of interests between ppl who try to be rational but aren't perfect.
I'm not sure bringing systems in to the discussion is necessary to make the main point. Like: if you pursue rational choices then there aren't any deal-breaking conflicts you have with anyone else who pursues rational choices. That seems fairly self-evident.
> The liberal claims re harmony of interests don’t rely on unlimited knowledge. They are not like “if men knew everything, there’d be harmony”. They are about avoiding conflict now. Understanding why you shouldn’t hate competitors for a job is achievable today given currently available knowledge.
Hmm, maybe systems are necessary to bring in to it. Like if two people are pursuing rational choices but think there's a conflict, then there needs to be some rules by which they evaluate the situation. The system is like the equilibrium everyone can agree on, and since there's only one: it's special.
I'm not sure I'm properly understanding it, though.
#18333 Can you come up with some other scenarios, besides competing job applicants, with some sort of apparent conflict of interest?
Liberalism/capitalism allows people to live in a commune and share stuff if they want to. There are many rival ideas about the best ways to live in a peaceful world but those are sub-types of liberalism. The standard terminology is that liberalism is the system of peace and freedom, and its rivals are the systems that reject peace and social harmony in some way.
> Can you come up with some other scenarios, besides competing job applicants, with some sort of apparent conflict of interest?
* one banana tree but two hungry people (and not enough bananas)
* multiple candidates running in the same election
* rich guy in a suit walking past drowning person (I'm not sure about this one)
* limited edition consumer goods
* competing for entry into a tournament (like the tetris world cup where the top 50 ppl go through)
* two kids who want particular gifts but their parents don't have enough money for both gifts
#18336 OK and can you provide solutions to those? Why isn't each one a conflict of interest?
What do you now think of these scenarios? Got some solutions re them being potential conflicts of interest?
- We both want the same diamond.
- We both want the same computer.
- We both want to marry the same woman.
- We both want the same slot on the manned mission to the moon.
- We both want to be President (of the same country).
- We both want to be the top commander of the army.
- I want to speak my mind but you don’t like what I have to say and would prefer I shut up.
- I want to kiss you but you don’t want to kiss me.
- I sell printers and you sell printers and we’re competing for customers.
try working on a discussion tree re conflicts of interest. you don’t have to include everything. you can pick important parts or paraphrase stuff if you want. or go through and do the entire discussion text. it’s up to you what you think would be useful.
Initial answers to some conflicts of interest questions (TM#49)
also posted to: https://xertrov.github.io/fi/posts/2020-10-18-notes-on-conflicts-of-interest/
Can all these be resolved?
> - We both want the same diamond.
> - We both want the same computer.
> - We both want to marry the same woman.
> - We both want the same slot on the manned mission to the moon.
> - We both want to be President (of the same country).
> - We both want to be the top commander of the army.
> - I want to speak my mind but you don’t like what I have to say and would prefer I shut up.
> - I want to kiss you but you don’t want to kiss me.
> - I sell printers and you sell printers and we’re competing for customers.
Conflicts of interest (CoIs) seem to exist sometimes. When considering rational ppl or trying-to-be-rational ppl, those conflicts don't actually exist--they're illusions which can be resolved. They look like conflicts because we're ignoring the bigger picture. Ppl involved in the CoI shouldn't want to 'win' via a system which uses force to get an outcome. They should want a system that's fair and works generally. A system with universality.
Systems which use force or unwritten rules are not preferable to free-market situations b/c they have adverse consequences outside of one's control (e.g. violence, 'winners' being decided by something like physical attractiveness or social status, etc). The outcomes -- when decided with alternative systems -- are worse for ppl involved. Reasons include: bad distribution of resources, outcomes being based on perceived problems that a person can't solve (e.g. not handsome enough), harm being done (e.g. violence), etc.
## We both want the same diamond.
Expansion of situation: we are both in a shop buying an engagement ring for our respective soon-to-be fiancées, and want the same diamond (diamond-A).
1. The initial 'solution' is that the shop sells diamond-A to whoever asks for it first. Person-A gets it. This is okay because both ppl can agree to a first-come-first-serve model (which is typical and expected).
2. Maybe person-B *really* wants the diamond. They can offer to buy it from person-A. This is okay because it's consensual trade where both ppl are better off.
3. Say person-A says they want to buy it but hasn't paid, but person-B has the cash now. The shop could work on a first-come-first-serve basis where the transaction is the important moment (who can pay first), so person-B gets it. this is an agreeable system.
4. Maybe there is another diamond (diamond-B) that one of the ppl is happy with, so person-A gets diamond-A, person-B gets diamond-B.
in each case an alternative system of distribution (based on attractive looks, or social status, or bribes, or whatever) is not preferable -- it's a worse society to live in.
## We both want the same computer.
Say it's a rare old computer so there's only one of them and it's not fungible. We can agree on a system which is fair, like an auction, and proceed on that basis.
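The "agree on a fair system, like an auction" idea could be any mutually agreed format; a sealed-bid second-price (Vickrey) auction is one well-known option where honest bidding is each person's best strategy. A minimal sketch (the names and bid amounts are made up for illustration):

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction: the highest bidder wins
    but pays only the second-highest bid, which removes the
    incentive to strategically under- or over-bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    # rank bidders by bid amount, highest first
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

# hypothetical bids for the rare old computer
winner, price = vickrey_auction({"person-A": 500, "person-B": 650})
# person-B wins but pays person-A's bid of 500
```

Both ppl can agree to this format up front b/c neither knows in advance who values the computer more, and the rules don't favour either of them.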
## We both want to marry the same woman.
She should choose who she wants to be with (if either of us). We shouldn't want to be with someone who doesn't want to be with us (that would be bad for both me and her). We should both want her to be able to consider both of us. If I had an advantage (e.g. knew her earlier) and tried to stop her meeting you b/c I thought she'd prefer you, then it means I have to keep that effort up WRT you and anyone else she might meet. So eventually I'd need to be coercive or forceful to do that. Hurting the person you want to marry is a shit thing to do (and a bad way to live long term), so I shouldn't want to prevent her evaluating other potential partners. I should actually be in favour of that because it means problems are apparent sooner rather than later. Living in a relationship where big problems *will* occur and that can't be resolved (e.g. she changes her mind about wanting to marry me) is bad for me, so if there will be problems I should want to know about them as soon as possible.
## We both want the same slot on the manned mission to the moon.
Say there are 3 crew slots and 2 crew members have been decided and are better candidates than us (at least for those slots, like the other crew have skills we don't).
### notes on alternatives to free-market / merit based judgement
- We shouldn't want to be chosen if that would jeopardise the mission -- it being successful is more important. we can agree that the most qualified person should be chosen, or the person otherwise chosen s.t. the mission has the greatest chance of success. Maybe we're equally qualified, though.
- We don't want a system where one of us is harmed (e.g. I hurt your family to keep you out of the mission). If I wanted that it could mean my family (or me) is hurt, which I don't want.
- We don't want the mission to be jeopardised for political reasons (or other parochial stuff), so we should be in favour of selection criteria which are publicly and politically defensible (and just).
- We don't want a system where one of us is prevented from doing stuff in the future like other moon missions.
- We don't want a system where NASA (or whomever) regrets their decision (e.g. because it was made via nepotism or whatever).
- We don't want a system where we hate each other because that could mean we can't be on the same future mission or otherwise end up excluded from other stuff.
- We can agree on a system based on merit
- We can agree on a system where NASA maintain a suitable body of astronauts (like a minimum number of astronauts kept in reserve), so some rotation is necessary (maybe one of us went on the last mission so the other should go on this one)
  - We can also agree on a system which takes into account future rotations, e.g. flip a coin and one of us goes on this one, and the other goes on the next mission
- We can agree on a system that doesn't bias one of us for external reasons like social status (if that happened, all missions would be worse off and have a lower chance of success)
operating under these sorts of systems is preferable to winning the slot under a different system. if it was some different system then how could we be confident that our crew is the best crew possible?
## We both want to be President (of the same country).
Note: curi and I sort of started discussing this at the end of *Tutoring Max #49*.
We should both be in favour of a good system for selecting a president. We can agree on important features such a system should have, like not favouring one of us. We should want a system where the victory conditions are clear and compatible with our values. We should want a system where we could lose b/c it's possible the other person is a better choice regardless of what we believe.
The conflict only exists when we have bad, irrational systems for choosing a president. If the system is bad then we can both agree changing the system is more important (and subsequently find a system which satisfies both our goals).
If there are other candidates, we should prefer those candidates who will institute a better system to those who won't. If there are perverse mechanics in the selection system (e.g. like those in first-past-the-post when you have 2 similar candidates running s.t. it *decreases* the chance of a favourable outcome) then we should both be in favour of cooperating to maximise the chance of one of us winning over bad candidates. We can find such a system.
We could also run a pre-election or something to decide which of us runs in the main election (similar to primaries in USA).
## mid exercise reflection
I worry that I'm missing something. Are these adequate answers? Do any of the apparent conflicts persist after what I've written?
I think these are hard problems to write about -- in some ways -- b/c there are always unknown and unspecified details which could be chosen to make the situation a 'nightmare situation' (as curi put it in TM#49).
Going to have a think and maybe come back to this later.
Some thoughts on good/bad error msgs. I think they're important. I found a surprising overlap with *helping the best ppl or helping the masses*
context: I'm thinking about error feedback and how it affects groups of people / group efforts in a general sense, and I'm also thinking about the sorts of error msgs programmers get in the regular course of programming and how it specifically helps/hurts software projects. The thoughts below are a mix.
* are error messages a good way to organise issues? (e.g. in software dev).
* they have an important role: they guide ppl who know less (than the developers, or other community members, etc).
* if error msgs were a bad way to organise issues then there would have to be a better alternative system. what would such a system be like, relatively speaking?
- it would put more burden on the ppl affected by errors b/c it's harder to know/learn how to report and solve the errors
- it would mean responsibility for the quality of error reporting would be shifted towards the shoulders of newbies
- such an alternative system would treat the relevant (preventable!) errors less seriously
* why could that be good?
- it'd mean there was a higher bar for engaging with top tier ppl
- it filters out ppl who are not able to understand the problem at least enough to figure out how to begin to deal with it
- if the best ppl don't know how to prevent relevant errors then isn't it better for them to focus on solving those problems rather than helping ppl who aren't as valuable?
* why could it be bad?
- higher bar to error correction -> less error correction
- easy to discourage ppl and end up reinforcing static memes / driving ppl away
- if the best ppl didn't know how to prevent the relevant errors then they end up working on the problem anyway; makes sense that there's an equilibrium here; after all, ppl are voluntarily participating on both sides.
- helping the best ppl or helping the masses
- error msgs and ~responsibility of senior members
- there is no one constant set of behaviours that makes sense WRT helping the best ppl vs masses, what matters is context. is it a good time to help one or the other? if lots of ppl have really bad ideas then it's probably worth helping the best ppl -- so we can find a good soln to that problem. conversely, if we don't have any great ppl at that time, or are otherwise short of *great* opportunities, then there's more utility helping the masses. there needs to be fertiliser for future generations, but also nourishment for current ppl in their prime. those *great* opportunities can be vicarious, ofc. Man's first journey to the Moon was a journey shared by a Nation.
- there's a big question raised by this: how should we react to *learning* of a great opportunity?
Finishing up: what happens if someone goes to an effort to make error msgs as good as possible?
- organisation gets better b/c the error messages are better suited to the associated errors
- it gets easier for ppl to help with / do error correction b/c the msgs/explanations match the contextually best ideas more closely and are more reliable to reason about.
- exponential/geometric increase in effectiveness of relevant key ppl -- their time can be better allocated, delegation gets easier, etc
- mutually beneficial for all parties. (note: this relies on the ability to improve error msgs and the right ~economic context to make it the easy choice. OTOH I think that's reasonably common. most non-optimum situations don't hurt much and can be easily controlled via the 2nd-derivative (~acceleration). if there's a bit too much work on good error msgs then you can just reduce the hours per week by 10%; it can be gentle without much harm. the harm I mention here is wasted resources in a generic sense.)
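To make the good/bad error msg contrast concrete: here's a hypothetical config loader sketch where each failure names what went wrong, where (file and line), and what to do next -- vs the bare "error" style that shifts all the burden onto the newbie:

```python
import os

def load_config(path, required_keys=("host", "port")):
    """Load a simple key=value config file. Every error msg states
    the problem, its location, and a suggested next step, so the
    person hitting it can start fixing it without asking a dev."""
    if not os.path.exists(path):
        # bad: raise Exception("error")
        # good: name the file and suggest a fix
        raise FileNotFoundError(
            f"config file not found: {path!r}; create it or pass the "
            f"correct path")
    config = {}
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            if "=" not in line:
                raise ValueError(
                    f"{path}:{lineno}: expected 'key=value', got {line!r}")
            key, value = line.split("=", 1)
            config[key.strip()] = value.strip()
    missing = [k for k in required_keys if k not in config]
    if missing:
        raise ValueError(
            f"{path}: missing required keys: {', '.join(missing)}")
    return config
```

The extra effort is on the side of the ppl who already understand the system, which matches the point above about where responsibility for error reporting quality should sit.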
## clarifying stuff
I didn't put a huge amount of thought into particular word choices because they felt difficult and I didn't want to ruin the flow. Here are some clarifications:
- *responsibility* as in *~responsibility of senior members*: i don't mean anything like an obligation, but if there was a clear moral decision then it'd line up with that.
- *2nd-derivative (~acceleration)*: controlling the rate-of-rate-of-change is useful if you want to control the outcomes of some (simple enough) system, and acceleration is a reasonably common way of talking about that.
I edited the OP to add the Max tutoring playlist link https://www.youtube.com/playlist?list=PLKx6lO5RmaetREa9-jt2T-qX9XO2SD0l2
some thoughts on project planning - Max
Context: project planning differences between doing projects yourself vs with a team
When I do projects myself, I work with a pseudo-JIT planning method. For the most part, the way I do prioritisation is based on immediate dependencies. I can also change focus with low overhead (like work on UI a bit, then backend, then UI, then backend)
Team projects are different. A lot more of the dependency graph needs to be defined upfront. There's a large overhead in transferring knowledge and changing who's working on what.
Does this difference matter for project planning? I suspect the methods to avoid fooling oneself mean the outcome is fairly similar. Like my naive method of JIT prioritisation leaves a lot of room to work on things that aren't that important -- to fool oneself.
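The pseudo-JIT method above can be sketched as: a task becomes workable the moment its immediate dependencies are done, with no upfront plan of the whole graph. A minimal sketch (the task names are made up for illustration):

```python
def next_tasks(tasks, done):
    """JIT-style prioritisation: return every task whose immediate
    dependencies are all finished. `tasks` maps a task name to the
    set of task names it depends on; `done` is the set of finished
    tasks. Nothing beyond the immediate frontier is planned."""
    return [t for t, deps in tasks.items()
            if t not in done and deps <= done]

# hypothetical solo-project dependency graph
tasks = {
    "backend-api": set(),
    "ui-mockup": set(),
    "ui-screens": {"ui-mockup", "backend-api"},
    "release": {"ui-screens"},
}
# with nothing done, both independent tasks are workable,
# so you can bounce between UI and backend with low overhead
```

This also shows the weakness mentioned: `next_tasks` says what's *possible*, not what's *important*, so there's room to pick whichever frontier task is most fun.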
(goal, context) pair
This is like a (goal, context) pair. It's not necessarily specific, like there's a lot that's implied or possibly known from wider context, but in essence it's a GC.
I think the idea of a partial IGC is nice and makes some sense. like the idea of 'when you've got a hammer every problem is a nail' is expressing an IC pair; there's a method and a context but no idea besides using the hammer like normal.
harmony of interests - don't forget about unknowns
continuing harmony of interest stuff:
In the last tutorial (50?) I think curi and I talked about some harmony of interests cases.
I was playing the start of breath of the wild -- the old guy offers to trade you something if you to go into a dungeon and get the treasure. he then reneges on the offer but says he'll do it if you do 3/4 more dungeons.
why wouldn't it be in Link's or old-guy's interest to be violent? (besides the fact it's a game, etc)
with an alternate system, like violence / forcible redistribution, you can get hurt a lot more by unknowns. say old guy tries to jump Link and steal the treasures after he finishes all the dungeons. Well, if old guy didn't know any better then he'd like murder Hyrule's saviour.
This is pretty extreme as far as examples go, but I suspect there's a general principle. DD's morality conjecture seems like a good foundation (the conjecture being: the only moral imperative is not to destroy the methods of correcting mistakes. BoI - a dream of socrates). The idea of all parties agreeing on the methods of distribution / conflict resolution (and being rational, thus mostly honest) means none of the parties are aware of mistakes unique to that method.
#18525 yeah you can brainstorm solutions to a goal/problem (or set of similar goals) in a context. you generally want a goal/problem before a solution. solutions/ideas in search of problems doesn't work well.
in-progress idea on (in)dependent status of variables as yes/no property of explanations [Max]
at the end of tutoring max 51 I mentioned a partial idea I'd had about dependent/independent variables (or factors) in an explanation and whether we could like test for it. Since it's a yes/no property, if we could test for it then we might be able to use ideas around this to refute some class of explanations. IDK whether that will pan out, needs more work as per end of TM51.
(the following is unedited)
PS. I have been thinking about the titles of comments; particularly whether I should include my name when I post to max-microblogging. My RSS reader (a chrome addon atm) shows one big list and doesn't separate curi's posts from comments with titles (at least by default). If anyone else has a similar set up then adding my name to the title will make it easier to filter post/comments. IDK if I'll keep doing it. Feedback welcome.
(it doesn't bother me that the RSS reader doesn't separate posts and comments. I'm trying to read everything atm, though I do have like 100-200 unreads atm. Making some steady progress working through the backlog tho.)
is dependent/independent variable status a yes/no property of explanations? can we test for it?
inspiring thought: emulating a video game and selecting different options/hacks. are they dependent or independent? how would you test without knowing the explanations for what they do?
is there a way to test for dependence/independence of variables, e.g. when making particular measurements?
- relies on (good?) theories of measurement
Educated guess: dependency between variables means most of the time things should look correlated; it takes very particular measurements, or measuring very particular behaviour, to get something that *looks* independent. Likewise, independence between variables means most of the time things should look *not* correlated, and only particular, specially chosen inputs make them look dependent.
When an explanation defines some system, the factors / components at play will have some dependence/independence relationship. whether two variables are independent will depend on the minutiae of the explanation, but should be yes/no. e.g. `f(x,y) = x + y` is dependent on both x and y. we can't change that by introducing more terms or by adding square roots or other things.
Do we have a way to test for dependence/independence? Is that even useful?
- How might it be useful?
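One way to make the educated guess above concrete: treat the system as a black box, hold one input fixed while sweeping the other, and check whether the output moves. A minimal sketch (the functions and sample values are illustrative):

```python
def depends_on(f, arg_index, samples, fixed=0.0):
    """Crude black-box dependence test for a two-argument function:
    vary only the argument at arg_index, holding the other at
    `fixed`, and report whether the output ever changes. It can
    demonstrate dependence, but can only *fail to find* it -- a
    specially chosen fixed value can make a dependent variable look
    independent (e.g. f(x,y) = x*y with y fixed at 0)."""
    outputs = set()
    for v in samples:
        args = [fixed, fixed]
        args[arg_index] = v
        outputs.add(f(*args))
    return len(outputs) > 1  # output varied => dependence detected

f = lambda x, y: x + y  # dependent on both x and y
g = lambda x, y: x * x  # ignores y entirely

samples = [0.0, 1.0, 2.0, 3.0]
# depends_on(f, 0, samples) and depends_on(f, 1, samples) are True
# depends_on(g, 1, samples) is False: varying y never changes g
```

This matches the educated guess: a "yes" answer is easy to get when variables are dependent, but a "no" answer is only as good as the measurements (samples and fixed values) chosen.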
feedback request: "Why I Live" draft 1 [Max]
I'm requesting feedback for the following draft. It's not done, so I've left some of my own comments as quotes below -- just the most important stuff which came to mind.
My biggest concern is that someone might read this and feel bad about choices they've made which are actually okay. If you have ideas about how to avoid this or whether it will happen I'd like to hear them. This is written because it's how *I* see *myself*, not because it's how I think everyone else should see themselves and each other.
Some of the language is a bit fancy. I'll do more editing in future drafts to simplify it, but suggestions/crits are welcome.
The draft is as follows:
# Why I Live
Existing matters. Particularly: *our* existing matters. Existing--as a *human*--is special, unique, and full of potential. We are--or at least, can be--cosmically significant.
My choices--today and in the coming decades--must have *some* impact on humanity's future. My choices *today*, more than any other moment, have the greatest leverage in influencing the manner and magnitude of that impact. Tomorrow, tomorrow's choices will have the most leverage. If there are important choices to be made, perhaps I should pursue an understanding of them with some urgency.
Can my choices be so significant that they make an appreciable difference in the volume, quality, and proximity of important milestones in the future of humanity? That is, they contribute to a future with *more people* leading lives of *higher quality* (both std of living and std of ideas) and *achieving* important philosophical and technological *progress sooner*.
If one's choices could matter that much, how could one possibly *know* that? Humanity and the route we take into the future are, together, like a giant snooker table loaded with hyperactive billiard balls and a thousand collisions for each single choice any person ever makes. Unpredictable, chaotic, immeasurably complex. *Everyone* will have *some* choices they make that will impact our collective future in a significant way. However, most people will never know which choices were the special ones. Most people's choices--the ones that end up mattering--will be special by *luck*. Their other choices--the ones they *hoped* would matter--will end up swallowed and forgotten by the passage of time.
> I need to remember to answer the questions I set up here. Currently I don't actually answer them, though I intend to.
> The snooker table analogy is energetic (which I wanted) but doesn't flow super well and is maybe over the top or dominating the paragraph with its length.
> I'm not sure if I should avoid answering 'yes' to the 'can my choices …' question. I think it might work better to answer (and explain) it later.
Imagine yourself at the divergence of your futures.
To the left is a future of hit and miss choices, erratic legacy, and the fog of cosmic uncertainty. However, you can have a straightforward life. Your choices need not carry the epistemic burdens of depth, nor urgency, nor consequence.
To the right is *the alternative*.
Not everyone can take the right path. But some people can. Some people have. Maybe I can. And maybe you can, too.
Before you think about which path to attempt, *what would it mean to choose poorly?*
What if you chose the right path but you fell short? What difference would that make compared to choosing left? *At worst*, realistically, *there would be no difference*; or at least no difference worth considering here.
What if you chose the left path, but you could have chosen *the alternative … **and succeeded?*** Would your descendants look back and think "they should have known better?" Would there even be descendants to look back? What good purpose could there be in avoiding greatness? What hope does a future of greatness-aversion have?
Can choosing mediocrity be evil? Certainly not always. Maybe sometimes.
The reality is that you are presented, not with two paths, but infinitely many paths every moment of every day. You are not constrained to a dichotomy of greatness or mediocrity. You are the beginning of a new infinity -- if you choose to be.
> I think I'm mostly okay with this section -- at least on this pass. I'm not sure about the "Can choosing mediocrity be evil?" bit.
## What About Failure?
I used to be worried about failing. "Could it be that I will spend my life in vain pursuit of greatness and progress?" If the past half decade had been different, that might have become true. I know better, now.
Contribution to the future--to *progress*--is not zero sum. No thinker can contribute to progress if they are cut off from civilization. The credit for the impact of a great thinker's choices must be shared. As a light-cone constrains the causes and effects of an event, an epistemic idea-cone constrains the prerequisites and consequences of a great thinker and their ideas. All those people inside the idea-cone share the credit. Without shoulders to stand on, an otherwise great man is blind. Without feet to stand on his shoulders, an otherwise great man is dumb.
The acts of pursuing, supporting, and nurturing greatness are noble and honourable deeds.
> I think I will expand on this last paragraph/line. It's okay to do 1 or 2 or 3 of those things. Arguably it's hard to avoid doing all of them if you spend time improving your thinking and know about FI / CF.
> This section feels like it might be a bit out of place. I like the middle paragraph. I wrote the middle paragraph first during brainstorming and then decided it needed a section. So I picked the title and wrote the first paragraph around paragraph 2. I like the last line, but I think it needs elaboration.
What is my choice? Do I choose a future where my choices matter? Yes or no?
I do. And I will continue to.
I refuse to retreat.
> I originally had paragraph 1 in the 2nd person, which might not go so well with the vibe. Originally:
> > What is your choice? Do you choose a future where your choices matter? Yes or no?
> I'm on the fence about stuff like "I refuse to retreat". On the one hand I think it has some of the weight I want, but it feels maybe a bit too close to like a war/violence theme. That can be inspiring for ppl sometimes ("never give up, never surrender" sorta thing), but maybe there's a better thing to say.
## some leftover notes
* end game for universe / civilization / intelligence?
** dunno, but ~100 years ago humanity debated whether other galaxies exist
** today, there are multiple conjectures--with disparate mechanics--accounting for the origin and nature of the universe (or at least some part of these things).
** what ideas will we have in another 100 yrs? 1k yrs? 10k yrs?
*** yeah, I'm not going to worry about the endgame. There's lots of time to figure that out. There isn't much time for some other things, though.
* i want to put together a philosophy intro pack for new software teams.
** I'm worried that most ppl would react badly to it if I just got a bunch of curi's essays together.
** but I think there's significant and important stuff to learn.
#18534 I had an initial goal when I started this that I didn't mention. I intend to include the conclusion (at the end or otherwise) that working on my writing is important and crucial. I'll integrate that on a future draft.
Apparently post-mortems are common/standard in cyber incident response (sub-branch of infosec).
thinking about it now: I followed the gitlab outage in 2017 closely since I was using it at the time. They had a good post-mortem process from memory.
> Not everyone can take the right path. But some people can.
What's the difference between people who can take the right path and people who can't?
> What if you chose the right path but you fell short? What difference would that make compared to choosing left? *At worst*, realistically, *there would be no difference*; or at least no difference worth considering here.
What does it look like to choose the right path? If it looks like spending all one's time on FI and ignoring one's spouse and kids, then that is a difference worth considering.
How do you tell which is the right path for any given choice?
It seems like you are facing a choice in your life, but it's not clear to me what that particular choice is about.
#18536 Yeah post-mortems for computer security breaches or web service outages help convince ppl u maybe know what ur doing and won't have the same thing happen to you again. They can help restore/preserve trust. I think that might be why they're common as public blog posts.
> PS. I have been thinking about the titles of comments; particularly whether I should include my name when I post to max-microblogging. My RSS reader (a chrome addon atm) shows one big list and doesn't separate curi's posts from comments with titles (at least by default). If anyone else has a similar set up then adding my name to the title will make it easier to filter post/comments. IDK if I'll keep doing it. Feedback welcome.
> (it doesn't bother me that the RSS reader doesn't separate posts and comments. I'm trying to read everything atm, though I do have like 100-200 unreads atm. Making some steady progress working through the backlog tho.)
Sounds like you should get a better reader.
Please don't put your name in comment titles just to try to hack how it shows up in a particular RSS reader that, for some reason, ignores the author field, and which may be fixed later and which maybe no one else is using. If you input bad metadata it makes stuff less future-proof. It's harder to change things around because the data in some fields isn't what it's supposed to be. E.g. I might decide to display author names at the start of all or long comments, and then if you duplicate author name in title and author fields you could end up with that info twice adjacent.
Author names already work in other readers. The data is in the feed fine already. E.g.:
And there are separate feeds for posts and comments so idk why they're merged for you.
#18547 yup, good points. Will fix stuff on my end.
#18538 Thanks for feedback Anne.
> What's the difference between people who can take the right path and people who can't?
In hindsight `can` is the wrong word. I changed the verb a few times in that sentence -- initially it was 'might', i.e. "Not everyone might take the right path. But some people might." I think 'might' is better than 'can' but I'm still not happy with the phrasing. it's still a bit different from what I wanted to say there.
> What does it look like to choose the right path? If it looks like spending all one's time on FI and ignoring one's spouse and kids, then that is a difference worth considering.
I don't think it needs to be anything hard-and-fast like that. There are lots of options.
One way to look at it is: early life is super important for ppl b/c so many ideas and behaviours get adopted/set at that point. I think one reason I did so well at maths in highschool was that I'd been introduced to good ideas about the world being consistent, explicable, etc, and some good idea about problems, challenges, and that perseverance helps break through intuitive barriers and stuff.
Is raising a kid really well a good thing to do? Yeah. it can be super valuable. It's also a task that's not something we get for free along the way (just getting better at philosophy and learning in general won't automatically make you a good parent, effort on TCS etc still needs to be done). So progress can be made in parenting (like TCS was) that's a direct philosophical contribution. In some ways I guess that for some ppl at some point "spending all one's time on FI" can actually overlap like 80%+ on doing parenting and family stuff.
> How do you tell which is the right path for any given choice?
IDK about generally handling specific options -- well I have ideas but I'm not confident about them yet. I was trying to get at the idea that there's a quality to good paths to take through life.
> It seems like you are facing a choice in your life, but it's not clear to me what that particular choice is about.
Maybe, there are a few options for things that come to mind here, but I think that might be a bit of a distraction WRT the essence of the article. Or maybe I can look at my prioritisation problems as that choice -- that's what I was thinking about before I started writing the post.
reflections on 'Why I Live' draft & idea in general
Some reflections on #18534
The title ("Why I Live") is at worst dishonest and at best inaccurate. In reality my actions don't always line up with what I said in the post or the things I was planning to add in future drafts. A better title would be something like "what I want to align my life to". It was dishonest b/c I was essentially claiming to make better choices than I do. Some of my choices line up with the post, but lots don't. Not just in minor ways, but like major ways. Why procrastinate from ~everything to binge a game for like 20 hours? That's choosing the crappy path.
## context before writing
I was thinking about what curi and I had talked about in tutoring 51 + earlier stuff about goals. I realised that I hadn't really written much about my goals when curi and I discussed that topic (2 sessions mostly I think). What I'd done was closer to writing my goals down as opposed to writing about them. The 'why I live' post is closer to talking about my goals, but there's a lot of implicit stuff. I think the implicit stuff (which relates to me personally) is fine because the post was meant to be more general than about my specific goals, more like framework/context stuff that informs what goals I think are good or not and why.
I was thinking about being unhappy with not writing, and about my goals and prioritisation etc. I had some thoughts like 'why be unhappy with it, you could work towards that now', and 'write about goals instead of just writing them down'. I started brainstorming and came up with the title and stuff later.
I had some anxiety when I checked the next morning for posts on curi (because I hoped there'd be some feedback) - even before knowing whether any were in microblogging. That was unexpected and notable.
I wonder if there's some interaction going on between:
- my desire for a life with meaning / greatness
- fear of failure / mistakes (i'm not sure how much I feel this but it's a pretty common static meme)
- the idea of choosing to fail (deliberately not trying) instead of trying and failing
- my desire for lacking responsibility sometimes -- it's not persistent but I sometimes want it so much that I renege on commitments or withdraw and ignore msgs and things. That behaviour doesn't go beyond reason, but it definitely happens to some extent.
I was also surprised when I thought that the post might be dishonest/misleading. Part of me wasn't, but part of me was, because a lot of the things in the post (or things that are implied by or implicit in it) are things that I want to believe about myself.
If I believe things about myself that are better than the situation in reality actually is, does that inhibit error correction substantially? Can it hide blockers that would otherwise be more apparent and easier to fix?
I feel 'yes' is the answer for some things.
Obvs thinking e.g. I'm 50th percentile playing poker when I'm actually 35th percentile isn't going to matter much, esp compared to thinking I'm 99th percentile but actually 90th percentile (which would have potentially *big* consequences for variance, consistency, and volume of earnings).
In this case I think it might be an issue b/c thinking I make better life choices than I do (and do so more consistently) can mean I am overconfident, which could have consequences like starting big projects too early / without enough planning / etc.
Maybe, then, this 'insight' is just like another aspect of overreaching and Oism?
### 'negative' feedback?
I'd like to know if anyone thought it was low quality, overreaching, vapid, dishonest, etc.
I don't think FI ppl would have avoided telling me this in abstract, but I could see ppl not providing feedback if they thought like 'there are so many issues it'd be effort to know where to start' or the like.
I think the dynamics between curi and me might be different b/c of the tutoring sessions -- like anything that needs to be discussed could be done there. I don't know how/if this would affect something in particular, but it occurs to me now & might be relevant.
On the whole I didn't think the writing was too bad considering it was a draft. There are some things that I don't think are clear enough from what I wrote, e.g. the intended audience. I wonder if I might write a less clear post if there are conflicting goals for the post. e.g. I wanted to write something for me (to spur myself on, as such), and I also wanted to write something that could become enduring (after more drafts) and would be useful or general in a philosophy-and-life type category. Maybe I'd do better by separating the two; writing at all helps me some, and I can do some of the more self-centric stuff in private journaling or in the brainstorming phase and copy those bits elsewhere s.t. they wouldn't be in the main post.
I also wonder if I'm overreaching at this stage trying to create anything that's enduring. I suspect there's not much I can do with a low enough ER s.t. it lasts decades.
Actually, there's lots I could do like that, just not stuff that I'm interested in doing. e.g. a guide on how & why to tie your shoelaces could be enduring on the scale of decades. I think I might do myself a service by lowering those standards for a while. I can always revisit them later, and I don't need to have them to actually produce enduring work (you can always go above your own standards).
I'm thinking of unendorsing ~everything I've written
Unendorsing ~everything would basically mean that I say: from some date (eg 3rd Nov 2020) it's not safe to assume I endorse/believe/etc anything I said publicly before that date.
It'd be like a reset on things. Partially that's b/c there are a lot of unresolved things (e.g. the FI discussion on Flux from 2017) which I think are not worth resolving at this point. There are also things I've said that I don't want to go back and try to change b/c that sounds like a lot of work. I don't have high enough confidence in what I've said to want to leave stuff lying around without knowing what it was.
I'd make a post on my blogs but wouldn't go to much effort outside that now. I'd point ppl to those posts when necessary and then address stuff on a case by case basis to update stuff I've said publicly.
I think this sounds like a fair thing for someone to do very infrequently, maybe only once.
I think this would be a good thing to do because I think my ideas and my self have changed substantially in the past few years and especially the past few months.
I also think that, from this point on, keeping things mostly up to date (or at least updatable) is something I could do. So doing a mass unendorsing would let me keep a higher quality library of criticism.
A mass unendorsing would also be a decent start to a library of criticism; it indicates a relevant discontinuity and means the stuff I write after that has special considerations the earlier stuff lacked.
If I intend to only do this once then I should treat it fairly seriously.
I'm looking for feedback.
Reach + parameterisation of reach
I wrote this as part of a post on reach + IGCs. This was 95% of the post at the point I copied it out; it seems better as a stand alone post.
I think reach is parameterised over infinite domains. I explain what I mean by that below. I don't know if this is right, but **I would be surprised if it were wrong**, so I would like to know if anyone disagrees.
When an explanation has reach, it means the explanation is general over many problems or situations. The exact nature of this generality is particular to the explanation.
One example of reach is theories which are time-independent and/or space-independent: maybe a theory works everywhere and at all times (at the beginning of the universe, and now, and we expect it will in 100 billion years and onwards), like general relativity or quantum theory; or maybe a theory works regardless of the time of day ('if I dial the emergency number someone will pick up, but only if people commonly have and use phones', 'leaving food on the stove will heat it, then cook it, then burn it, and then set fire to it').
Reach can be mundane or fantastical. All explanations have some level of reach.
It's hard (or impossible) to compare explanations' reach without a common phenomenon to use as a basis. Does a theory of housing in post-GFC North America have more or less reach than a theory of sodium's role in the ionosphere? I don't know if that question has an answer. What about Newtonian gravity and general relativity? It's reasonable to say that GR has more reach because GR explains phenomena which Newtonian gravity does not, *and the reverse is not true*; GR is an explanatory superset.
Reach has a size, but it's hard to be exact besides 'zero' and 'infinite'. It doesn't really make sense for reach to ever be completely zero (b/c the explanation would not account for anything). If an explanation has little reach then it only accounts for very specific things (it's parochial). Explanations can also have unbounded reach. Sometimes it might seem like explanations have near-unbounded reach (e.g. Newtonian gravity seemed unbounded *except* for the orbit of Mercury). In reality, we can't reliably tell the difference between 'near-unbounded' and 'parochial' without a superior understanding of what's going on (which requires additional explanations).
Reach is parameterised over infinite domains. That's because those domains correspond to--at least--levels of emergence. Some domains are subsets/supersets of other domains, but they can be incomparable too. Example: natural selection has some domain like 'all life on earth with a genome' (it might be even more specific than this, though). Natural selection also has some domain in time and space; we expect natural selection will work in many other contexts too, like some alien life, provided it meets certain conditions. It's possible that some alien life might not meet those conditions, like hyper-advanced bioengineered AGIs. It's not clear how natural selection would work there, and we can recognise that b/c we know something about the bounds of its reach. We can reason about what domains of reach an explanation has by exploring the explanation (thoughts, experiments, predictions, etc).
We don't care that explanations have zero reach in some domains. General Relativity doesn't really have anything to do with housing prices in the USA. So even though GR has some universal reach WRT time and space (AFAIK; galaxy rotation rates aside), it has 0 reach WRT housing prices. (It does have reach when it comes to houses themselves; you still experience gravity when building houses and when inside houses etc, and need to take it into account.)
If an explanation has reach over *some complete domain* we say it *has universality*. I don't know if the domain needs to be infinite, but it seems like many important universalities have an infinite domain. Some universalities are special and some seem like they're not special. Some important domains of universalities are: all matter at all points in space and time, all computable programs, all programs, all real numbers, all people, all alphabets, all ideas, etc.
curi once said:
> X is a universal Y if it can do any Z that any other Y can do.
I could say: This Turing machine is a universal computer because it can compute any program that any other computer can compute. The universality is *all computable programs*.
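That relationship can be made concrete with a toy sketch (my own illustration, not from the discussion; the rule-table encoding and the name `run_tm` are arbitrary choices): one fixed simulator program that can run *any* single-tape Turing machine handed to it as data.

```python
def run_tm(rules, tape, state="start", pos=0, max_steps=10_000):
    # One fixed simulator that runs *any* single-tape Turing machine.
    # rules maps (state, read_symbol) -> (next_state, write_symbol, move),
    # where move is -1 (left) or +1 (right); "_" is the blank symbol.
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, cells[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A particular machine (flip every bit), encoded purely as data:
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}

print(run_tm(flip, "0110"))  # -> 1001
```

The point is that `run_tm` itself never changes: swapping in a different rule table gets you a different machine, which is the 'can do any Z that any other Y can do' relationship in miniature.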
Or: Paths Forward is a universal discussion methodology because it can lead to a successful conversation for all conversations that any other discussion methodology can lead to success in. (I don't know if this is actually true yet, but I suspect it is and would be surprised if it were false.)
# short reflective post script
I *think* I have used terminology correctly but there are parts I'm not certain about. An example: When I talk about domains and universalities I'm reasonably confident the terms are close-to-right (or actually right / okay / clear), but wouldn't be surprised if there were minor issues. I would be surprised if there were major issues.
> The universality is *all computable programs*
I should have said "the universality is over all computable programs". There might be some other similar typos (I didn't edit much)
difference between actual and future people
I think there's a difference between people who actually exist and people who will exist.
You can harm both, and the rights and life of both matter. However, I don't think future people can be thought about as individuals. Like you can *collectively* harm future ppl, but you can't harm an *individual* future person.
This is relevant for stuff like abortion, AGI, thinking about one's choices and what to do in life.
I posted this to #low-error-rate on discord: https://discordapp.com/channels/304082867384745994/667162736970432522/774921042820464681
One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.
Learning efficiently requires a cycle of: do it once, do it consistently, do it fast/cheap/autopilot. This becomes very important because learning is an incremental process, so your learning history affects your learning future.
If your learning cycle is incomplete then you won't create a solid foundation for future knowledge. You need a solid foundation because new knowledge/skills will compound errors in the foundation. Example: if you have bad fine motor skills then you will have a lower limit on how fast you can type; errors in precise finger movements (or precise-but-slow finger movements) disproportionately affect your typing speed (compared to precise-and-fast finger movements).
Being able to judge your own learning cycle requires good self-judgement. Without good self-judgement you can't learn quickly and efficiently. Without good self-judgement you will end up making more mistakes than you would otherwise. This error rate becomes a ceiling on your progress. Since error rates compound as you learn new things, it is possible to get stuck. At that point you stop being able to reliably make progress. To solve this you need to go back to earlier (more foundational) topics and complete the learning cycle.
If you plan to learn without outside assistance, you should do your best to ensure that your self-judgement is consistent enough and cheap enough for the topics you want to learn about.
Goal of the above post: I wanted to practice doing something that I could submit to #low-error-rate. I also wanted to write something which is useful for other FI ppl, and which helps me understand topics like learning and overreaching. I think I succeeded. However, this took ~50 minutes to write and edit, so I have not reliably completed step 3 of the relevant learning cycle(s).
> Example: if you have bad fine motor skills then you will have a lower limit on how fast you can type; errors in precise finger movements (or precise-but-slow finger movements) disproportionately affect your typing speed (compared to precise-and-fast finger movements).
I found an error w/ the noun phrase "errors in precise finger movements (or precise-but-slow finger movements)". It's a noun phrase b/c it's the subject of the main verb 'affect' in that clause. It sounds like I'm saying: errors in [precise or precise-but-slow] finger movements. This is because I use two incomparable nouns (those being errors in finger movements and finger movements). This error also affected the sentence later on where I say "compared to precise-and-fast finger movements". This doesn't strictly make sense b/c the subject of the clause is errors. I expect that--given this particular error--most people would do background error correction to fix the sentence or would just gloss over it.
I would fix the clause so that the subject is [errors or successes when doing X]:
Example: if you have bad fine motor skills then you will have a lower limit on how fast you can type; errors in precise finger movements and successful precise-but-slow finger movements will disproportionately affect your typing speed compared to successful precise-and-fast finger movements.
Revisiting the sentence now, I would consider replacing 'finger movements' with 'key strokes' but I think that it's not a big deal and doesn't significantly impact the post.
Some policies I am thinking of writing
- policy on social engagements
- policy on gifts and gift giving
- policy on birthdays, holidays, and celebratory events
These are all things on which I have unconventional views (some developed more recently than others). Writing said policies (and accompanying explanations) will help me make sure I'm clear about my ideas. Because those ideas are somewhat unconventional I will have errors that could be avoided if I used the corresponding traditional ideas instead. They're also low-stakes topics, which is good for practicing writing and writing policies. I should practice writing policies before trying to write important policies like a debate policy.
a danger of unconventional ideas on academia
One of the dangers of having unconventional views on academia is that you can end up in a situation where you don't produce a track record of thinking / ideas / research efforts. Academia and the associated journal-article norms are the primary method ppl use to establish a track record of serious research and having/presenting pioneering ideas.
I think rejecting uni culture is one reason I have ended up writing so little up till now; I had issues with some aspects from the start but I got more critical over a few years until I dropped out. (Tho I've been tempted twice by offers of a shortcut to postgrad stuff: first a masters of geoscience and second a PhD in compsci focusing on blockchain stuff. There are circumstances where you can do postgrad stuff without an undergrad degree -- basically if someone will vouch for and supervise you. Skipping the BS was one reason I considered it more fully.)
I tried to do some writing after dropping out but never found it easy, and I wanted to talk about topics beyond my writing skill. I don't think doing the normal undergrad-to-phd-to-research thing would have fixed all those issues (I'm still critical of academia), but I would have ended up with a track record at least.
I think my lack of a track record has been significantly detrimental to me more than once.
I've got some questions:
1. Is being able to tell when you're prone to making mistakes the same thing as having good self-judgment or is it a type of good self-judgment?
2. What do you mean by "Being able to judge your own learning cycle"? Judge what about your learning cycle?
I'm a bit tired writing this, so I wouldn't be surprised if there were some small mistakes, but I spent some time going over it so I'm confident there are no big mistakes.
> 1. Is being able to tell when you're prone to making mistakes the same thing as having good self-judgment or is it a type of good self-judgment?
I think it's a sub-skill of good self-judgement, sort of like a component. There are some edge-case situations though, like when you're new to something. Those situations might not need the full 'good self-judgement' skill b/c there's like no part of it you should be confident in. But in general good self-judgement is more than just being able to tell when you're prone to making mistakes.
Another way to think of it is: it's a type of *self-judgement*. Having *good self-judgement* means being proficient at *a few key types of self-judgement*.
I think the next answer might help clarify things.
> 2. What do you mean by "Being able to judge your own learning cycle"? Judge what about your learning cycle?
I will answer this Q then reflect on it.
To have an efficient learning cycle *without help* means you need to be able to make 2 types of decision well.
The first type is decisions about focus. "Which things should you focus on and what stage of the learning cycle should you focus on for each?"
The second type is decisions about when to change focus, for both topics and methods.
Particularly, you want to make good decisions/judgements about when you can transition from step 1 to step 2, and from step 2 to step 3, and when you're done. If you don't make good decisions then you either move on too soon and end up with compounding errors, or you move on too late and waste time (or maybe get bored with the topic in general). I think the first problem is much more common than the second.
Doing each learning step well means you'll probably use a different method for each step. So you have to choose to stop doing method 2 and start doing method 3 at some point, and if you make a better decision there then you'll have better results w/ learning.
Both of those decision types (what to focus on and when to move on) are dependent on self-judgement. Good self-judgement means you'll be able to make better decisions. There might be other contextual factors too, but good self-judgement is *always* a factor.
Being able to make those decisions well is what I mean by "being able to judge your own learning cycle".
I considered if "Being able to judge your own learning cycle" was vague when writing/editing the post. I chose to keep it like that b/c I thought it was clear enough and b/c writing less means both finishing sooner and fewer chances for mistakes to occur.
My answer to (2) is pretty long. In part that's b/c I wanted to be extra clear but it also shows how much I left unsaid.
I think it might have been a minor mistake to omit more details when I considered if the idea was vague. I don't think it was a big mistake b/c it was easy to fix.
That said, what are the expectations we should have around mistakes that arise due to audience mismatch? It's really hard to write for super broad audiences, so I should expect some background miscommunication with like ~most audiences, right?
I think I need to understand more about audiences, miscommunication, and stuff about expected (or acceptable) errors and their properties (like their magnitude and type).
What question(s) do you have for us? Are you asking if we think this is low error rate? If so, what kinds of errors are you interested in knowing about?
Brief thoughts on different types of errors (learning / overreaching)
(note: this is a draft I wrote yesterday. I think I was intending on adding more. That said, it's a complete idea as it is and relevant to my reply to #18593, so I decided to post it now.)
wrt learning and overreaching: sometimes we'd be surprised by some errors but not others. I'm not surprised by subtle grammar mistakes in stuff I write, but usually the topic isn't about grammar. So I can sometimes be confident of a low error rate in the content but not in the grammar. If there's a difference in my expectations on which errors would be surprising, I should say so and commit before errors are found.
Note: I wrote this faster than I might otherwise b/c I'm short on time today.
> What question(s) do you have for us?
This is a good question. I don't have an answer ready. (BTW Anne, thanks for #18587)
Here are some questions for you and FI ppl generally I put together while writing the rest of this post:
- Do you think #18579 contains any significant errors? Does my self-judgement idea make sense and seem reasonable?
- Is the self-judgment idea important enough for ppl to read/know?
- Are there any new ideas in #18579 or is it all derivative of stuff curi has said? I'm not sure about this one (partially b/c I haven't read everything curi's written).
Lesser or more general qs:
- Does this overlap significantly with any of curi's previous posts on learning? I searched for "self-judgement" but didn't find much.
- Has curi written a post with the 3 steps of learning? I had them noted in one of my early tutorial notes and curi mentioned it in one of his newer podcasts (maybe the *sense of life* one). I didn't find anything when searching tho.
> Are you asking if we think this is low error rate?
I don't think I explicitly have asked that, but I am interested in criticisms / conjectures of errors.
I guess I sort of expect that if I claim something is #low-error-rate and someone on FI disagrees then they'd say so. That's not a safe assumption/expectation though. Also, just b/c ppl don't spot errors at the time doesn't mean there aren't errors.
> If so, what kinds of errors are you interested in knowing about?
#18596 has some related thoughts I had yesterday.
In general I'm interested in all errors, but I'm *particularly* interested in errors w/ content (like in the idea/explanation itself), and errors in clarity. That said, while I mentioned the goal of #18579 I didn't mention things like what learning I was focusing on particularly.
Like #18582 is still good to know (grammar issue / minor clarity error), but I'm not too worried about that error -- it's not a bottleneck.
I think we should (as like students of philosophy) be explicit about the things we're focusing on and stages that we're at (e.g. learning reports might be a good place for that). Then the errors we should be interested in are more obvious.
- Errors in the foundation (b/c they compound); if they exist it means we've moved on too soon from something, or forgotten to learn something, etc, so these are v. useful to know about.
- Errors in stage 1 that prevent us from doing something at all; these can be hard to spot ourselves.
- Errors in stage 2 (consistency); corrections here can be high-yield, like getting the right technique for doing something. If we had a bad technique we might get consistent but not get fast.
- Meta-errors: stuff like being bad at grammar/writing; these are generally inhibitive and have a lot of reach.
I'm not sure about errors in stage 3; like we should want to know about them, but those errors won't have as much reach, I think, and having gone through stages 1 and 2 means that we're in a pretty good position to spot those errors ourselves and probably have some ideas on how to solve them.
I had this thought after the US election:
The payouts for bets on Trump/Biden winning varied throughout the election, and were >2 for both of them at some point. If you can place two bets (one on each candidate) where both payouts are >2 then you guarantee a profit (overheads aside). You should also balance the bets so each side pays out the same amount.
A payout being >2 means that you get more than $2 back for a bet of $1.
Events with *sufficient variance* (or uncertainty) sound like they should generally work with this strategy.
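The balancing can be sketched numerically (my own sketch; it assumes decimal-style payouts where a $1 bet returns $`odds` if it wins, and ignores fees/overheads; the name `arb_stakes` is arbitrary): stake each side in proportion to the inverse of its payout, so the return is identical whichever side wins.

```python
def arb_stakes(odds_a, odds_b, bankroll=100.0):
    # A two-way arbitrage exists iff 1/odds_a + 1/odds_b < 1;
    # both payouts being >2 is a sufficient condition for that.
    margin = 1 / odds_a + 1 / odds_b
    if margin >= 1:
        return None  # no guaranteed profit
    # Stake each side in proportion to the inverse of its payout.
    stake_a = bankroll * (1 / odds_a) / margin
    stake_b = bankroll * (1 / odds_b) / margin
    payout = stake_a * odds_a  # equals stake_b * odds_b by construction
    return stake_a, stake_b, payout

sa, sb, total = arb_stakes(2.1, 2.1)
print(round(sa), round(sb), round(total))  # -> 50 50 105
```

With equal payouts the split is even; with unequal payouts the stakes skew toward the lower-paying side, which is the 'balance the bets' part.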
Okay, here are some comments about content and clarity in #18579. Keep in mind that I’m not skilled at writing essays and I don’t think I could write a good several-paragraph essay.
- It seems like the main idea of the essay is that good self-judgment is important for learning. But learning isn’t mentioned in the first paragraph. I think your first paragraph should have your main idea in it.
- I think a better main idea would be something like “being able to judge your error rate is important for learning”. The essay isn’t really about being able to judge other things about yourself, just about judging when you are likely to make errors.
> One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.
- I asked my first question in #18587 because in your second sentence, “means” can be read two different ways. The sentence could mean “having good self-judgment implies that you’re able to tell…” or “having good self-judgment is defined as being able to tell…”
- Re my second question in #18587, what do you think of writing something like “being able to judge where you are in a learning cycle requires good self-judgment“ instead of “Being able to judge your own learning cycle requires good self-judgement”? I think that’s what you’re getting at.
- I don’t think the example of fine motor skills and typing is a good one here. People don’t have to learn fine motor skills in the abstract in order to learn to type fast. People can learn the fine motor skill of fast typing directly.
This seems interesting - epistemic use of category theory?
> Category theory offers a unifying framework for information modeling that can facilitate the translation of knowledge between disciplines.
> Category Theory for the Sciences is intended to create a bridge between the vast array of mathematical concepts used by mathematicians and the models and frameworks of such scientific disciplines as computation, neuroscience, and physics.
I finished *The Choice* finally.
I liked it and recommend it. I found some bits less engaging, but new ideas are introduced all the way through. It's worth reading in full.
I think there are a few things in the book that are misconceptions that have been improved upon. Like in Ch5 there's the idea that the belief in the inevitability of conflict prevents ppl from progress. This sounds like it conflicts with BoI's 'problems are inevitable'. I think a better way to put Goldratt's idea is: the belief that the solutions to emergent conflicts are unattainable or nonexistent holds ppl back. These sorts of things are not a big deal tho. Goldratt's ideas are quite good and I didn't have much trouble aligning them with CF ideas when those sorts of things came up.
> Okay, here are some comments about content and clarity in #18579. Keep in mind that I’m not skilled at writing essays and I don’t think I could write a good several-paragraph essay.
Thanks :). FYI I found I was able to improve relatively quickly with some practice (or at least that's how it feels). I'm not sure what you've tried or how important it is to you. I'm happy to help or give suggestions/feedback and things if you think that would be good.
> - It seems like the main idea of the essay is that good self-judgment is important for learning. But learning isn’t mentioned in the first paragraph. I think your first paragraph should have your main idea in it.
I was initially confused till I went back and checked. The first sentence is "One of the most important skills for life is good self-judgement." I agree that I don't explicitly mention the self-judgement <-> learning connection in the first paragraph, but I think I could modify the first line with the following to fix things: "... important skills for effective learning (and life in general) is good self-judgement."
>> One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.
> - I asked my first question in #18587 because in your second sentence, “means” can be read two different ways. The sentence could mean “having good self-judgment implies that you’re able to tell…” or “having good self-judgment is defined as being able to tell…”
Ahh, I think I understand, maybe. I read the sentence just now with two different groupings:
- having X means you're able to tell (when Y or when Z)
- having X means (you're able to tell when Y or when Z)
I'm not sure the 2nd way is grammatically valid b/c I have a "when" on both phrases either side of the "or".
I can sort of see the difference between implies and defined as, but not exactly sure I see it here. I think I might be able to avoid this sort of ambiguity anyway, generally.
other thoughts on this q:
FTR I meant: having X implies you can tell (when Y or when Z).
I don't know if I should use "and" here instead of "or". I used "or" b/c I think it goes with "when". If I used "and" instead, I would replace "when" with "the times when".
The "or" version means approximately "you can tell between these situations" and the "and" version means approximately "you can identify both of these situations". The different is subtle, but maybe it's important here. The "or" version (as I've put it here) sort of sounds like only those two options are possible.
Comments/Qs I'll respond to later:
(I'm a little short on time but wanted to post something)
> - Re my second question in #18587 [...]
> - I think a better main idea would be something like “being able to judge your error rate is important for learning”. [...]
> - I don’t think the example of fine motor skills and typing is a good one here. [...]
Your qs and comments were helpful btw. I don't know when I would have noticed the life/learning thing in the first line for example.
running communities based on ideas - note to self
> In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points?
> Do you have public writing detailing your ideas which anyone is taking responsibility for the correctness of? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems.
When you start or run a community based on ideas you need to take responsibility for issues with those ideas. You shouldn't hand over the reins to someone who can't fulfill that role. For an enduring tradition to be created around ideas there needs to be a stalwart who takes responsibility. Progress and persistence are hard without that. If you hand over to someone who falls short then the tradition can die, or at best will have a discontinuity (something you can't bank on). Examples: Karl Popper didn't hand over well (AFAIK) and there was a discontinuity ending with David Deutsch; Ayn Rand handed over to (I think) Binswanger and Peikoff, and they didn't rise to the necessary standard. Both Popper and Rand did a lot while they were alive; enough to give their ideas a good chance of surviving. Maybe they could have done better? It's hard to say, b/c where were the ppl who understood greatness enough to support Rand, work with her, carry on the tradition, and ultimately grow to exceed her? It seems selling millions of books didn't really help with *that* too much. You should make sure that the ideas you promote are written down and endure. You can't rely on suitable ppl finding you during your lifetime.
What does it mean to choose to be a heroic achiever?
*Philosophy: Who Needs It*, ch. 2 or 3 (ch. 3 in the Audible book), 26:50 / 35:30.
I'm open to any discussions or suggestions on this topic.
#18607 one part is not being second-handed
>>> One of the most important skills for life is good self-judgement. Having good self-judgement means that you're able to tell when you're prone to making mistakes (and what they are) or when it's safe to be confident in doing something with a low error rate.
>> - I asked my first question in #18587 because in your second sentence, “means” can be read two different ways. The sentence could mean “having good self-judgment implies that you’re able to tell…” or “having good self-judgment is defined as being able to tell…”
> Ahh, I think I understand, maybe. I read the sentence just now with two different groupings:
> - having X means you're able to tell (when Y or when Z)
> - having X means (you're able to tell when Y or when Z)
> I'm not sure the 2nd way is grammatically valid b/c I have a "when" on both phrases either side of the "or".
I don't see the difference between these two groupings. I'm not saying there isn't one, just that I don't get it. That's not what I was getting at.
> I can sort of see the difference between implies and defined as, but not exactly sure I see it here. I think I might be able to avoid this sort of ambiguity anyway, generally.
> other thoughts on this q:
> FTR I meant: having X implies you can tell (when Y or when Z).
A way to reword it to make the implication clear is "If you have good self-judgment, then you'll be able to tell when..."
> I don't know if I should use "and" here instead of "or". I used "or" b/c I think it goes with "when". If I used "and" instead, I would replace "when" with "the times when".
I did not see this point at first, but now I do.
> The "or" version means approximately "you can tell between these situations" and the "and" version means approximately "you can identify both of these situations". The different is subtle, but maybe it's important here. The "or" version (as I've put it here) sort of sounds like only those two options are possible.
I've learned from FI to see subtle differences in wording as important. Which one did you mean to say? Do you want to say that only those two options are possible?
A possible "or" rewording that's wordy but more clear: "... you're able to tell if you're in a situation where you're prone to making mistakes or in one where you can be confident that you'll have a low error rate."
A possible "and" rewording: "... you can identify both where you're prone to making mistakes and where you're unlikely to make mistakes."
>> I don't know if I should use "and" here instead of "or". I used "or" b/c I think it goes with "when". If I used "and" instead, I would replace "when" with "the times when".
> I did not see this point at first, but now I do.
>> The "or" version means approximately "you can tell between these situations" and the "and" version means approximately "you can identify both of these situations". The different is subtle, but maybe it's important here. The "or" version (as I've put it here) sort of sounds like only those two options are possible.
> I've learned from FI to see subtle differences in wording as important. Which one did you mean to say? Do you want to say that only those two options are possible?
I'm not sure how much difference there practically is between the two in this case -- I don't think it's particularly significant either way. That said:
I wanted to say that those states are mutually exclusive and cover the full range of situations. Good self-judgement can reliably tell which state you're in for a given situation.
So I meant the *or* version. You're either prone to making mistakes, or you're not and can be confident.
And yes, I did want to say those two options are the only options possible.
Choices Matter (reflection)
I've thought for a long time that choices are epistemically significant; even before I had a good idea of what 'epistemically' means. One of the first ways I remember thinking about this was wrt determinism and free-will. My idea was roughly: if a person *chooses* to believe in (and thus have) free-will, then they do have it, and if they choose not to, then they don't. I don't think that's strictly correct, but I do think there's an essence of truth insofar as the belief that your problems are soluble is required to seek solutions. If you don't seek solutions to your problems then your life will be ruled by static-memes and other things. Those ideas take away your control and autonomy over your life (at least in particular partial ways).
I think Rand agrees that choices are epistemically significant in some way. From *Philosophy: Who Needs It* (p 45, kindle edition):
> A man does not have to be a worthless scoundrel, but so long as he chooses to be, he *is* a worthless scoundrel and must be treated accordingly; to treat him otherwise is to contradict a *fact*. A man does not have to be a heroic achiever; but so long as he chooses to be, he *is* a heroic achiever and must be treated accordingly; to treat him otherwise is to contradict a *fact*.
I'm pleased that there seems to be some convergence between what Rand has said and my draft 'Why I Live' post (#18534), esp considering I wrote my draft before starting *Philosophy*. I think that's probably due largely to learning from curi and consuming his content. (e.g. repeatedly returning to think on the topic 'helping the best ppl or helping the masses'.)
Note: I'm unsure about my honesty with this next paragraph. I didn't want to cut it, though, in case anyone has some criticisms of it, or has suggestions of things to consider when one is self-doubtful in this way. I'm using a blockquote to signal that it's different from the rest of the post.
> I've repeatedly thought about whether and how choices matter, and one reason for that is my enduring dissatisfaction with how I've been treated (particularly wrt academics or 'intelligence'). I've largely been treated well, and often preferentially, going back almost as far as I can remember. My current explanation for that dissatisfaction is that I think my success has had more to do with my choices than innate ability. I don't know how early that started, though. I have an example about explicitly choosing to change my attitude and approach to an aspect of life from when I was 13, so I think I must have had some important ideas before that.
I still think choices matter, and also that the choice to believe *choices matter* is epistemically significant.
There's a deluge of bad ideas waiting to flood one's mind if one doesn't take one's choices seriously.
convergence in some of Rand's ideas and rational/static memes
> Kant originated the technique required to sell irrational notions to the men of a skeptical, cynical age who have formally rejected mysticism without grasping the rudiments of rationality. The technique is as follows: if you want to propagate an outrageously evil idea (based on traditionally accepted doctrines), your conclusion must be brazenly clear, but your proof unintelligible. **Your proof must be so tangled a mess that it will paralyze a reader’s critical faculty**—a mess of evasions, equivocations, …
Philosophy (p. 140). Penguin Publishing Group. Kindle Edition. (Emphasis mine)
This converges w/ the definition of anti-rational/static memes:
> Static (aka anti-rational) memes *disable the holder's creativity* to prevent criticism of themselves. They are not adapted to be useful, but block effective thinking about that.
Can collaborative writing be used as a good learning tool and error correction process?
I was reading the FI thread "Reading Until the First Error" and start to consider writing a post on the topic. Then I wondered what would happen if a few FI ppl chose a common topic and wrote collaboratively on that topic. Would a better post be produced than if any one of those ppl attempted it individually?
It's unlikely the final post would be near identical to any individual author's attempt unless that author were a lot more knowledgeable on the topic.
If none of the authors are an outlier like that, and if the authors have paths forward, then the collaborative post should be better than any individual attempt. Also, the process of resolving disagreements (done to completion) should mean the authors sort of 'sync up' all their relevant knowledge, which would make writing it a good learning exercise, too.
I'm not sure how this would work in practice, but I'd be interested to find out if anyone else is interested in doing something like this (I am).
There are some possible issues, like if there's a significant mismatch between the authors that could mean the process is inefficient. Or if the topic is too advanced the process could fail but hide sources of error. Topic choice is pretty important, then.
I am interested in trying this.
It sounds difficult. The co-authors would need to come to agreement on topic, on what to say about the topic, and on the writing. I like the idea of talking with other people about exactly what to say and how to say it.
Do you know of examples of competition between static memes?
I'm curious about any examples of competition between static memes, particularly b/c I want to know how they interact.
> Static (aka anti-rational) memes *disable the holder's creativity* to prevent criticism of themselves. They are *not adapted to be useful*, but block effective thinking about that. Their focus is on making the host unable to reject the meme.
curi explicitly qualifies the criticism that's prevented with "of themselves", which is pretty important for static memes if they're competing. It's an advantage if a static meme can avoid preventing (or even encourage) criticism of a competing static meme without compromising the suppression of criticism aimed at it.
I think that's an easy-to-miss point about static memes and maybe a good reason to use the term `static meme` over `anti-rational meme`.
Confession as a static meme
I think the idea of something being confessional might be a static meme. I've never done catholic-style confession but I have had some experiences I'd describe as confessional. I think it's a way to avoid completely dealing with problems. Maybe it's not a full static meme in its own right (static companion maybe?) but I'd be surprised if it were unrelated.
It's different to honesty, too. You can have an experience that's both confessional and dishonest.
I've thought about this a bit b/c I think I went through a pseudo-confessional thing early on in tutoring. More generally, I think it might be easy for ppl to mistake some of the Oist mindset/integrity/secondhandedness stuff in a way that produces ~confessional behaviour/experiences.
Hooking - Learning Technique
Hooking is a useful technique for learning.
The name is taken from programming; a "hook" is an entry point for attaching new functionality to existing software. That functionality is added after the original software is written (usually by a different programmer).
Hooks are common in programming. In user interface frameworks like React or Vue, you can hook into lifecycle events. Those hooks let the programmer add code that runs before or after meaningful things happen, like before the page is loaded, or after all the assets (like images) have been displayed on the screen. When using Git, hooks like "prepare-commit-msg" and "post-update" allow programmers to run scripts before and after some of the steps in Git's procedures.
In programming, the hooks that are chosen are at important breakpoints. Roughly, they are the last and first moments around a significant event. There are guarantees about things that have happened and things that will (or might) happen -- something possible only because significant breakpoints were chosen.
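A minimal sketch of the idea (all names here are hypothetical, not from any real framework): callbacks get registered against named breakpoints, and the original software runs them when it reaches each breakpoint.

```python
# Hypothetical hook registry: maps a breakpoint name to the callbacks
# registered at that breakpoint.
hooks = {}

def add_hook(breakpoint, callback):
    """Attach new functionality at a named breakpoint."""
    hooks.setdefault(breakpoint, []).append(callback)

def run_hooks(breakpoint, *args):
    """Called by the original software at each significant event."""
    for callback in hooks.get(breakpoint, []):
        callback(*args)

# The original program chooses the breakpoints: the last moment before
# a significant event and the first moment after it.
def load_page(url):
    run_hooks("before-load", url)
    content = f"<html>{url}</html>"  # stand-in for the real work
    run_hooks("after-load", url, content)
    return content

# A later programmer adds behaviour without editing load_page at all:
add_hook("before-load", lambda url: print(f"loading {url}"))
add_hook("after-load", lambda url, content: print(f"loaded {url}"))

load_page("example.com")
```

The point of the sketch is that the new behaviour only runs when its breakpoint is hit; the rest of the time it costs nothing and the original code doesn't need to know it exists.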
When learning, we can do something similar, where we can introduce ideas based on noticing some event. Adding hooks to certain actions, thoughts, etc. allows us to introduce ideas at that specific point, without the overhead of trying to keep the idea in mind.
One example is forgetting to use the word 'that'. Maybe you notice [that] you sometimes omit 'that' and you think that your writing is less clear because of it. If you choose a breakpoint like 'when proofreading I needed to reread a sentence', you can hook the idea 'check for missing implied words'. You're not constantly looking for implied words, but you are ready to add that behaviour when you hit that breakpoint.
The word 'breakpoint' has a meaning in both programming and CR, and I want to clarify that I'm using it in the CR sense. It aligns fairly intuitively with the programming sense, but with programming you act on a breakpoint differently (e.g. debugging via stepping through code and looking at memory). That idea is not specific enough.
In CR, a breakpoint is a boundary on a spectrum (or pseudo-spectrum) which allows you to make yes/no judgements about IGCs and to effectively deal with continuous/unbounded data.
With hooking, the spectrum* that these breakpoints segment* is your thoughts and actions, the idea is the relevant methods of feedback and error correction, and the goal is to avoid the relevant error and to continue avoiding it.
*: 'Spectrum' and 'segment' are (I think) specific to 1d shapes. You'd probably need more than one dimension to represent all possible thoughts and actions. However, that distinction isn't really relevant here; the principle is the same.
It's usually easy for programmers to make good guesses about which hooks are useful (and thus good guesses for which breakpoints are useful). The important thing is that the programmer has good explanations for why hooking here in this context is worthwhile. Sometimes there are explanations with reach, like why hooks between initialization and runtime logic are useful.
It's harder to tell where hooks should go (where to put breakpoints) when it comes to learning. Making good judgements about where hooks go requires that you have a good explanation. That, in turn, requires that you understand how to detect, correct, and ultimately avoid those errors. This means hooking is hard to use well when you're just starting to learn a topic: to judge where hooks should go, you need enough knowledge about both learning generally and the topic that you're studying. So it's better suited to stage 2 learning (consistency).
Making a bad judgement about where hooks should go can have a high cost. One reason to use hooks is that you want to conditionally introduce behaviour with a high overhead. That behaviour doesn't need to be high overhead on its own; maybe it's only high overhead when you have 3 other ideas in mind that you're focusing on. If you're triggering a badly placed hook (at a bad breakpoint) then you will often have the overhead of considering error correction that doesn't actually help. If you don't quickly realise that the hook is badly placed, the cumulative cost can hold your learning back.
#18656 @Anne, I'm thinking about this (like how to do it, how to choose a topic, where to discuss stuff, etc).
I was a bit stuck on thinking about topics. We'd need to have a topic where we didn't have a serious knowledge gap -- otherwise it's more like one person writing and one person editing. That sounds like it's (generally) a hard thing to find. (Luckily?) I think we both have a similar level of knowledge about collaborative writing.
considering a project: a series of posts on learning.
I'm considering a new project: writing a series of blog posts about learning. One goal is that I could, if I wanted, turn that series into a book later (if it was good enough, substantial enough, etc). Making a profit (e.g. by selling a book-book) is not a goal.
If you didn't see https://curi.us/2390#8 btw, I mention the idea of buying a topic thread on curi.us to organise discussion about this.
> I am interested in trying this.
Cool. I don't really mind how long we take as long as we get to some form of completion. ~failure is okay if we do a proper postmortem.
> It sounds difficult.
Somewhat yeah. I can think of some issues we might run in to. Even things like synchronisation (like one person being a bit more inspired and doing a lot) could become an issue. That's probably not what we should worry about tho.
> The co-authors would need to come to agreement on topic, on what to say about the topic, and on the writing.
Sounds like pre-plan, plan, and execution.
I don't mind the idea of doing a post on collaborative writing. Maybe that's not a good idea for a first go at it though. Do you have any topics in mind?
I've been quite interested in learning lately, and I'm planning to write more about that. It's hard to tell if we'd have similar levels of knowledge on relevant topics there, though.
I'm not actually sure that we need similar levels of knowledge. One reason I thought that is a thought experiment I considered. e.g. "What would the end result be like if curi and I tried to write a post? Would it be different to a post that curi would write alone? Maybe, but would the differences be significant? Probably not."
But there's another case, too, which is where two ppl each have more knowledge about different, particular (sub-)topics that are relevant for the post. Like I think you've read a lot more than I have about TCS, and I have done a lot more programming. What would it be like if we tried to write a post about teaching your kid to program? Would we even be able to write a good article? IDK yet.
> I like the idea of talking with other people about exactly what to say and how to say it.
Me too. I am going to try to pay attention to my mindset in those discussions; I'd like to know how it differs from my typical mindset when writing a post or having a discussion.
#18679 Note: forgot to add my name to this post.
My normal procedure for posting here, based on some of the things Alisa wrote on this topic, is: fill in topic if relevant, fill in name, start writing, if the post gets too long then copy to a text editor and copy back when done.
The problem with #18679 was that I didn't do my normal procedure. I clicked 'reply' on #18656 first, then decided I wanted to quote instead so clicked quote. After that I just started writing. I ended up copying that to a text editor and copying back once I was done. Then I clicked 'post message' like my normal procedure.
If I introduced a 'check fields' step before submitting I could have avoided this mistake.
I was thinking about my "Why I Live" draft and other titles that would fit the content better without dishonesty around my mismatch with how I want to live and how I do live.
One title I considered was "Why Live?" and noted that my answer to that q would be very different to the content in "Why I Live". But I think questions like "why live?" are ones that a philosopher should have a good, *considered* answer to. (Another question a philosopher should have an answer to: what are the questions a philosopher should have an answer to?)
This reminded me of https://direct.curi.us/2093-a-discussion-of-steven-pinkers-enlightenment-now-the-case-for-reason-science-humanism-and-progress
I didn't realise the relevant section in *Enlightenment Now* is v. similar until checking just now.
The following quotes are in the PDF attached to the linked blog post, tho I am copying them here too.
The extract from *enlightenment now*:
> But the most arresting question I have ever fielded followed a talk in which I explained the commonplace among scientists that mental life consists of patterns of activity in the tissues of the brain. A student in the audience raised her hand and asked me:
> “Why should I live?”
> The student’s ingenuous tone made it clear that she was neither suicidal nor sarcastic but genuinely curious about how to find meaning and purpose if traditional religious beliefs about an immortal soul are undermined by our best science. My policy is that there is no such thing as a stupid question, and to the surprise of the student, the audience, and most of all myself, I mustered a reasonably creditable answer. What I recall saying—embellished, to be sure, by the distortions of memory and *l’esprit de l’escalier*, the wit of the staircase—went something like this:
>> In the very act of asking that question, you are seeking *reasons* for your convictions, and so you are committed to reason as the means to discover and justify what is important to you. And there are so many reasons to live!
>> As a sentient being, you have the potential to *flourish*. You can refine your faculty of reason itself by learning and debating. You can seek explanations of the natural world through science, and insight into the human condition through the arts and humanities. You can make the most of your capacity for pleasure and satisfaction, which allowed your ancestors to thrive and thereby allowed you to exist. You can appreciate the beauty and richness of the natural and cultural world. As the heir to billions of years of life perpetuating itself, you can perpetuate life in turn. You have been endowed with a sense of *sympathy*—the ability to like, love, respect, help, and show kindness—and you can enjoy the gift of mutual benevolence with friends, family, and colleagues.
>> And because reason tells you that none of this is particular to *you*, you have the responsibility to provide to others what you expect for yourself. You can foster the welfare of other sentient beings by enhancing life, health, knowledge, freedom, abundance, safety, beauty, and peace. History shows that when we sympathize with others and apply our ingenuity to improving the human condition, we can make progress in doing so, and you can help to continue that progress.
> Pinker gives AD HOC answers to the most basic moral philosophy questions, b/c he is no expert on the matter, but a public faker. he admits this to open his book.
I was part of the conversation that comment is from back in 2018, and I'm surprised by how little I remembered about Pinker's writing. It is *full* of social dynamics. (Under 'social dynamics' I'm counting the dishonesty, fanciness, impressive words, concept-dropping (staircase wit in French), kantian flavour, etc.)
Aside: studying curi's *Grammar and Analyzing Text* course on gumroad + analyzing lies + puppet strings content is how I learned to see this stuff; covered in & around tutoring max #28 particularly.
I did remember that curi pointed out it was an *ad hoc* answer, though.
I don't want to be like Pinker. His answer is fancy, impressi-confusing*, and not considered. I want my answer to be "bold and simple" and something I understand *really well*. I don't need to understand the topic really well before I start, I just need to keep things in a draft state and accept criticism till I get there.
*: IDK if 'impressi-confusing' is a good word or not, but no word comes to mind (besides 'kantian' maybe) for writing that *projects* that it's well thought out but is actually hard to parse and understand properly. 'Academese' is similar but a bit different.
> I didn't realise the relevant section in *Enlightenment Now* is v. similar until checking just now.
I misspoke here. I meant: I didn't realise the relevant question from this section ("why should I live?") is v. similar to the title I was thinking about ("why live?").
I was focusing a bit on quoting text from curi's pdf and pinker's ebook at that time, so I think I just changed context too much while writing and dropped some words. I sometimes switch context like that in the middle of a sentence; that will definitely increase my error rate unless I go back and do editing later (which is higher effort than doing it well the first time for simple stuff like this). I could either finish the sentence and then go do the thing (like copying quotes), or, when I return to a half-done sentence, I could re-write the entire sentence. I think the second option might be a bit better for the end result. That said, I should remember to think about the overhead of switching like that sometime and check that it's not much higher than I estimate.
collaborative writing project
I think it would be fine to do it on your site if you are willing. Or we could do it by email or Discord or something and then post the whole thing somewhere. I don’t want to pay to put it on curi’s blog.
I don’t have topics in mind. I suggest, since this is an experiment in collaborative writing, that we pick a topic that’s easy and not try to say too much about it this time. Then we could focus better on the writing and the collaboration.
I too am interested in learning. I’m also interested in your other recent topic of why to live. So maybe a smaller topic in one of those areas would work.
About TCS, most of the reading I did about it was on the TCS list, which was of variable quality, and it was around 20 years ago and I don’t remember it much. My memory is that it was mostly about everyday parenting issues and not about learning.
Your original post about this (#18630) said “a few FI people”. Is anyone else interested?
> I don’t want to pay to put it on curi’s blog.
I'm curious about this. Why do you say that?
I wasn't expecting to share the cost btw. I'm not sure if that was clear or not. If you also didn't think we were sharing the cost, though, then wouldn't it make sense to think I would pay for the whole thing? Qs that come to mind: do you disagree with *me* paying for it? or maybe with having the discussion in a venue that cost money? would it make a difference if curi created a suitable topic without being paid?
What do you think?
> we pick a topic that’s easy and not try to say too much about it this time.
I think this is a good idea; I was ready to consider topics that I would consider writing about alone. The problem with that is those topics are *already* near the edge of my limit. Since I don't have much practice doing collaborative stuff, the only safe thing to presume is that a collaborative effort is also near the edge of my limit. Doing a complex topic, then, means I/we would risk overreaching or at least making EC harder.
When I read this idea earlier today I immediately thought about my writing practice during the tutorials. At the time I was a bit reluctant to write about shoes and swimming and things, but it was really valuable b/c it meant a bunch of issues got teased out in a manageable way.
There are a ~dozen left over writing topics I didn't do in some of my tutorial nodes. Do any of these seem like good topics to you or give you ideas for topics? They're at the bottom of: https://xertrov.github.io/fi/notes/2020-08-12/#previous-possible-exercises---not-done-was-previous-homework
> Your original post about this (#18630) said “a few FI people”. Is anyone else interested?
I'm not against more ppl getting involved if they'd like to (esp if it turns out to be valuable), tho I do think it'll be easier with fewer people to start with.
collaborative writing project
It's fine if you want to pay to put the collaborative writing discussion somewhere. I just don't want to pay anything because there are free places we could put it that seem fine to me.
Some of the topics you list would require substantial research for me (maybe not for you) and some wouldn't. All would require some thought about what kinds of things to include in the writing.
I think we should get our working space set up first and then start talking about topics and how we'll go about collaborating.
I've been checking this thread once a day. I'll start checking it more often.
re: collaborative writing project
> I just don't want to pay anything because there are free places we could put it that seem fine to me.
Oh yeah ofc.
> I think it would be fine to do it on your site if you are willing
I don't mind, though I'm developing a new site ATM that's more like curi.us in terms of functions/features. If we use my site I'd like to avoid doing too much before I swap over (which helps avoid migration concerns). The new site is not a static site like my current sites are; having a static site makes doing comments more difficult.
> I've been checking this thread once a day. I'll start checking it more often.
Cool. Keep in mind our TZ differences (i'm +11 atm; otherwise +10); I think we're about 8 hours apart. I'll also set the title as I did here for related comments. I'm busy the next 24 hrs or so.
I asked patio11 a question (very soon after he'd posted the tweet) and he replied: https://twitter.com/XertroV/status/1328963125262102531
> [... snip first few tweets in thread ...]
> As somebody who routinely has meetings at 7 AM minutes after waking due to the unending tyranny that is time zones, I would toggle a "Edit my face to look healthy and rested" setting in Zoom in a hot second, if for no other reason than to have less inquiries about health status.
> Why don't you turn your webcam off instead?
> Tough for that Schelling point (everybody on camera or nobody on camera) and important, for interpersonal and organizational reasons, that I be seen (literally and figuratively) as a full participant in meetings affecting me/my projects/etc.
I don't buy the Schelling point thing but mb that's just a culture/perspective difference.
> Makes sense. Particularly for interpersonal/org stuff and senior/leadership/etc roles.
> I think *most* ppl should just turn off their webcams, though (and that the cultural *expectations* around keeping webcams on are bad).
(thread ends there)
re: collaborative writing project
@Anne, I thought we might be getting stuck and listing current potential problems seems like a good way to avoid that (or start at least).
- A place for a dedicated discussion. I sort of expect it to be high volume. Do you also expect that?
- Topic choice -- there's plenty of stuff that comes to mind about what *I* personally want to write about, but I get stuck suggesting something. Maybe we could each brainstorm some topics that are simple that we don't know too much about. Then we could swap lists/trees and pick ones that sounded good from each other's list? That should work as long as we end up with 1 or more topics.
- We might have an issue with latency and work volume? Like if we have trouble being synchronous (there aren't many good overlapping times I suspect) then slow / low volume messages means writing will take longer. Smaller and easier topics will help get to a result faster and move on to a new topic (repeating the cycle as many times as we like).
- Maybe the potential size/difficulty is a bit intimidating? I want to do it because I think I can learn something about learning and/or discussing. I also want to make progress in both those things regardless, so I'm slightly worried that maybe it's inefficient. An easier topic will help there. Also, nothing bad will happen if we pick something too easy.
Some discord msgs that are relevant to learning philosophy and problems joining/doing FI:
In highlights 4, curi talks a bit about learning philosophy and helping the world (particularly w/ philosophy). At 11:15 he says (my words) that mistakes in philosophy impact your philosophy progress a lot and mean your efforts aren't very productive -- I think there's an implication that this is not the case for most/all other things (which makes sense to me).
I think these 2 reasons contribute:
- philosophy is not very mechanical. it's not like getting better at replicating something straightforward like practicing speedrunning tech.
- philosophy has like high inertia or poor/slow feedback. stuff like speedrunning / maths / coding all have very fast feedback cycles: once you 'finish' you learn whether you were successful or not fairly quickly. philosophy is like the other end of the spectrum, though, and errors can go a really long time without being noticed.
#18716 link to video: https://youtu.be/2xUcSFh4IkU?t=675
postmortem on onerednail.com
I've started writing a postmortem on my onerednail.com thing.
The problem seems fairly simple now:
on the site I say something like: ideas matter, and that wearing a red nail polish on a particular finger is the symbol I'd chosen for that. particularly I say:
> To be a *responsible* thinker requires accepting this, because without doing so you would deny yourself the most powerful method of *error correction* we have. We must *live the consequences* of our ideas and morality, strive for their betterment, and understand the consequences of the alternative.
the problem is that this does not line up with behaviour. if I value philosophy, why have I not been writing and learning and doing any of it? I *thought* I was, but it was superficial. what I *should* be doing (living the consequence of my ideas) is dedicating time and effort and things to actually doing philosophy.
> The red fingernail is -- for me -- a dedication to those ideas and values.
Not anymore. Now it's more of a reminder about *faking* that dedication.
> It is a declaration of responsibility, and a desire to accept it.
It was an *abdication* of responsibility, and a desire to *believe* I accepted it, even if that was because I was *fooling myself*.
(Minor grammar mistake: I say the red nail is a desire, where I should have said "a symbol of my desire" or something like that.)
*Was it bad to do it though?*
First, I don't think it was *wrong* to do. Like, I don't think it went against any of my principles; it was at least a *claim* that philosophy was important -- which is good -- and it didn't hurt anyone.
The idea has serious problems, so it's bad in that sense.
However, it did succeed at some things. This one particularly:
> It is a reminder of the importance of philosophy and epistemology in daily life, [...]
It did do this. I paused for thought and considered things more deeply than I would have otherwise. It was a direct part of some significant events b/c *I was bothered when I thought my actions didn't line up with the symbol.* There are some significant thoughts I wouldn't have had, and actions I wouldn't have taken, if I'd decided not to do it in the first place.
For that reason *I don't regret the mistake, and I would be happy to make the same mistake if I didn't know better.*
This was meant to be a gist-icle (short/summary-ish) but it's longer than I anticipated. I'm going to post to FI about it b/c I think it's important enough. I'm not sure if I'll think of more to add, though.
Rand and the gold std?
In *Philosophy* Rand says:
> Money is the tool of men who have reached a high level of productivity and a long-range control over their lives. Money is not merely a tool of exchange: much more importantly, it is a tool of saving, which permits delayed consumption and buys time for future production. To fulfill this requirement, money has to be some material commodity which is imperishable, rare, homogeneous, easily stored, not subject to wide fluctuations of value, and always in demand among those you trade with. This leads you to the decision to use gold as money. Gold money is a tangible value in itself and a token of wealth actually produced. When you accept a gold coin in payment for your goods, you actually deliver the goods to the buyer; the transaction is as safe as simple barter. When you store your savings in the form of gold coins, they represent the goods which you have actually produced and which have gone to buy time for other producers, who will keep the productive process going, so that you’ll be able to trade your coins for goods any time you wish.
Philosophy (pp. 153-154). Penguin Publishing Group. Kindle Edition.
I think she's mistaken about using gold as money in some way. Particularly:
> Gold money is [...] a token of wealth actually produced.
I disagree. Its production and value aren't connected like I think she's implying they are / should be. Gold could do as good a job at being money regardless of whether it's dug out of the ground or given to us by aliens (provided it gets distributed at the same rate). Maybe I misunderstand her?
Anyway I stopped at this point b/c I wanted to write more thoughts, but I think it's better to have lower standards for this sort of 'getting stuck' and just post something basic. Noting it publicly is enough to get me unstuck (keep reading) and I can figure out this misunderstanding in parallel.
She also says
> To fulfill this requirement, money has to be some material commodity which is ..., rare, ...
I think "rare" is the wrong word. "Scarce" is better. I don't think this represents a major error on her (or my) part tho.
idea for learning post: Gem of Seeing
Gem of Seeing is a D&D item that gives the user Truesight. That lets the character see things like secret doors, spirits, magical items, etc. The point is it lets you *see things that are really there, but you otherwise can't see*.
* say you're making major mistakes about learning
* particularly that you don't see lots of errors you're making -- they're really happening but you can't see them.
* what would life be like with a gem of seeing (for learning) vs without?
* if you *didn't* have a gem of seeing and didn't know you were making mistakes, how could you tell?
* you'd *believe* you *weren't* making the mistakes. reality tells you that, right?
* but you would be, and they'd still be a constraint.
* this is the case for ~everyone.
* now imagine you find a gem of seeing, and suddenly you can see all these mistakes you're making.
* you can see other ppl's mistakes too, but they don't believe you if you tell them.
* how much of a difference would that make to your life?
* could you afford to ignore the possibility?
Being bad at learning (and evasive/dishonest about it, too) is like never having a gem of seeing.
Being really good at learning, and thus philosophy, is like having the gem of seeing. In reality your 'Truesight' skill builds up incrementally though; it's not all in one go like the gem.
#learning (i'm tagging stuff to make it easier to find later)
re: collaborative writing project
>@Anne, I thought we might be getting stuck and listing current potential problems seems like a good way to avoid that (or start at least).
> - A place for a dedicated discussion. I sort of expect it to be high volume. Do you also expect that?
High volume is okay. Maybe somehow that will push me to think and write faster.
You're working on a place for a dedicated discussion, right?
> - Topic choice -- there's plenty of stuff that comes to mind about what *I* personally want to write about, but I get stuck suggesting something. Maybe we could each brainstorm some topics that are simple that we don't know too much about. Then we could swap lists/trees and pick ones that sounded good from each other's list? That should work as long as we end up with 1 or more topics.
Okay, I'm willing to do this. But see below about topics.
> - We might have an issue with latency and work volume? Like if we have trouble being synchronous (there aren't many good overlapping times I suspect) then slow / low volume messages means writing will take longer. Smaller and easier topics will help get to a result faster and move on to a new topic (repeating the cycle as many times as we like).
Actually, we seem to have a good chunk of overlapping time. You're still posting things now as I write this and I've been awake for nine hours. Should we set up times to work together? Should we do some voice chat as well as text?
> - Maybe the potential size/difficulty is a bit intimidating? I want to do it because I think I can learn something about learning and/or discussing. I also want to make progress in both those things regardless, so I'm slightly worried that maybe it's inefficient. An easier topic will help there. Also, nothing bad will happen if we pick something too easy.
I too am hoping to learn something about learning and/or discussing, and also about writing.
I think we should aim first for something too easy. Maybe we should first look for easy topics that we already have something to say about, not topics we'd have to research.
re: collaborative writing project
> Actually, we seem to have a good chunk of overlapping time. You're still posting things now as I write this and I've been awake for nine hours.
This isn't necessarily typical. It's 7.42 am for me atm. I'm usually waking up about now.
> Should we set up times to work together? Should we do some voice chat as well as text?
Yeah, that sounds like a decent way to start (can do set up stuff over discord).
> I think we should aim first for something too easy. Maybe we should first look for easy topics that we already have something to say about, not topics we'd have to research.
This sounds like a good plan.
I've brainstormed some topics. Some of them require research and some don't.
Maybe we should hold off on doing more until we have our dedicated place for the project.
#18724 Yup. I need a bit of time beforehand anyway. I'll brainstorm some stuff too.
your life depends on getting unstuck
https://youtu.be/2xUcSFh4IkU?t=1194 - curi Philosophy Stream Highlights #4
> And a lot of times the first time someone gets majorly stuck, they stay stuck forever and it starts changing them.
There are a few ~common sense type analogies that come to mind. Like with jogging there's the idea of 'don't slow down because you won't be able to start up again'. Both jogging and learning are endurance activities. You have to maintain like a consistent minimum to keep going, or if you slow down too much you have to try way harder to get going again. A good attitude can make that easier but it's never free.
The foundation of a good learning strategy must include staying unstuck.
Your reaction time to getting stuck is like an overhead on learning. It's ~randomly distributed so on average it's like a *constant* overhead. A constant overhead *you can remove*.
You can't predict where you'll get stuck (or when or how often), but you *can* get better at staying unstuck. If you don't get better, then you get ~random periods of being stuck with ~random durations. If you just wait for them to resolve themselves your rate of progress will tend to 0.
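As a toy illustration of that overhead claim (my own sketch, not anything from the stream; all the numbers are made up): if stuck periods hit at random and you only escape them after some reaction time, your long-run rate of progress drops by a roughly constant factor -- and shrinking the reaction time wins most of it back.

```python
import random

def progress(days, p_stuck, stuck_len, reaction):
    """Toy model: each day you either make 1 unit of progress, or you get
    stuck. A stuck period lasts `stuck_len` days if left alone, but you
    escape after `reaction` days if that's shorter. Returns total progress."""
    total = 0
    day = 0
    while day < days:
        if random.random() < p_stuck:
            day += min(stuck_len, reaction)  # stuck: time passes, no progress
        else:
            total += 1
            day += 1
    return total

random.seed(0)
slow = progress(10_000, 0.1, 30, 30)  # never notices being stuck early
fast = progress(10_000, 0.1, 30, 3)   # reacts within 3 days
```

With these made-up numbers, `fast` ends up several times larger than `slow` over the same calendar time: same number of stuck events, but each one costs far less.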
I once summarised part of BoI as "the alternative to problem solving is death". The alternative to staying unstuck is a wasted life.
Everyone can get better at being unstuck. It's not like a super abstract idea that's out of reach. It's easy to start. Easier than you think, and there's always a new option to try. The hard part is being honest enough with yourself to start -- *believing* you can take the next step, rather than actually taking it. (Don't know where to start? Don't know any techniques? Don't know if you're stuck? That's your starting point. Ask questions. Seek answers. That's part of why FI exists, so you have a place to do that.)
If you're stuck, give yourself the chance to get unstuck. You can't rely on other people looking out for you. Sometimes you can't even rely on *you* to look out for you. Take matters into your own hands. Your life depends on it. You depend on it. Get unstuck.
You also have to get good at knowing when you're stuck.
I am having a moment of excitement b/c I wanted to note that
gem of seeing and self-judgement are relevant things I want to link -- **things I wrote**. I feel like I have a very tiny library. But I didn't know what it felt like to have a library before. I'm very happy right now.
#18728 I looked up the definition and happy isn't a strong enough word. Joy is better.
BTW I've been using it but idk if *library* is the best word. I don't know many good options but there's one good word which may be better: *archive*.
I’d argue the benefit you got in the form of a new and good habit -- i.e. frequently reflecting on your own values during times of choice -- was worth it, even if you did know better.
errors in *my* thinking vs errors in *their* thinking
I think most ppl have a focus issue when it comes to thinking about errors in discussions/debates.
Errors in what other ppl say are not very interesting. But errors in what *you* say should be v. interesting *to you*. (And similarly, errors in what *I* say are very interesting to *me*.)
If you're debating someone, and they make 10 errors and you make 2 errors: how should you feel about that?
I think you should feel the same regardless of whether they made 0 errors, 10 errors, or 100 errors. You made 2 errors, and that's what you should care about. There's no such thing as "winning" if you're making mistakes; for win-win you want to have had those mistakes corrected before the end of the discussion. Those 2 errors are the only things that *you* are able to improve *without relying on others*. They're the only errors you can guarantee that you can fix. You have to be aware of them too, but after that you can fix them.
Say you have a conversation transcript of you debating someone. You show that to someone and ask for feedback. When you're being given feedback, what should you want the focus of the feedback to be on?
The bad (but common) way of thinking about this is to want the reviewer to *agree* with you, and say things like 'your opponent was bad' and 'your arguments were good'. A little bit of that is okay, and some discussion of your opponent's errors is okay when it's relevant.
**But!** *The **only** part of the reviewers feedback that will **actually** help you is the reviewer's **criticisms of your arguments**.*
Praise will never help you.
Criticism of the opponent will rarely help you.
The only thing that will consistently and reliably help you become a better thinker, discussion partner, debater, and philosopher is *criticism of your own errors*.
> BTW i've been using it but idk if *library* is the best word. I don't know many good options but there's one good word which may be better: *archive*.
Yeah. I think I was originally a bit mistaken on what you meant by *library of criticism* (that might have been early in tutoring, maybe the yes/no videos?). I thought you were talking more about your blog -- something that *other* people could interact with. Now I think it's more like a thing in your head for checking new ideas against, and the blog is auxiliary.
*Archive* is a decent word.
*Garden* comes to mind as a metaphorical option.
#18740 Computer programming has something kinda like a library of criticism: a *test suite*. Maybe in philosophy it could be called a *test suite for ideas*.
#18741 Mm, thinking down that track: maybe like *criticism suite*, *critical test suite*, *suite of criticism*, *critical unit tests*, *unit crits*
Wrt the *archive* side, maybe like a *cache* would be a fitting description. When you change your mind on something it's akin to cache invalidation.
The idea of an art *exhibition* came to mind too; like it's the stuff that's good enough to be on display.
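The *test suite for ideas* analogy can be sketched in code. Everything below is made up for illustration -- the criticisms, the idea format, the function names -- it's just to show the shape of the analogy: each criticism is a check an idea must pass, like a unit test.

```python
# A toy "criticism suite". Each criticism is a predicate an idea must pass.
criticisms = {
    "is it testable?": lambda idea: idea.get("testable", False),
    "does it solve the stated problem?": lambda idea: idea.get("solves_problem", False),
    "is it free of self-contradiction?": lambda idea: not idea.get("self_contradictory", True),
}

def run_criticism_suite(idea):
    """Return the criticisms the idea fails (empty list = survives, for now)."""
    return [name for name, check in criticisms.items() if not check(idea)]

idea = {"testable": True, "solves_problem": True, "self_contradictory": False}
failures = run_criticism_suite(idea)  # this idea passes every check
```

The cache-invalidation point fits too: when you change your mind about one criticism, everything previously checked against it needs re-checking.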
What's the alternative to making a mistake only once?
Nearly all mistakes we make aren't a big deal as one-off-mistakes. There are exceptions, like if you crash a car (you might kill someone). Maybe doing/learning philosophy, too. Mistakes are inevitable, so for most things making ~zero mistakes long-term isn't practical and might make that thing harder.
If mistakes are going to happen, they have to happen at least once.
I think there's a common intuition that the alternative to making a mistake only once is making the mistake 2 or 3 or 4 times, etc. *This is wrong.*
The real alternative to making a mistake once is *making a mistake infinitely many times*. That's what most people do, and that's what getting stuck is. If you make a mistake a handful of times you might not even notice it, esp. if it's not a bottleneck.
If you haven't made the choice to prioritise a learning attitude, then why would making a mistake the 2nd or 3rd or 4th time be any different to the first? If you have a learning attitude, then each mistake is an opportunity. Each mistake is *potentially* the last time you make it. If you don't have a learning attitude then you will only stop making mistakes by *~luck*.
If you don't have a learning attitude, then a new mistake is potentially the beginning of an *unbounded* series of mistakes. That will mean you have a worse life.
You should want to *avoid this situation at all costs*. Luckily, if you are stuck, you're always at the *beginning* of that unbounded series of mistakes. **You can choose to apply bounds by adopting a learning attitude!**
Everyone is at the beginning of *an* infinity. It's your *choice* whether that is *your* infinity, or your *mistakes'* infinity.
1st order stuck vs 2nd order stuck and structural epistemology
Note: I think this is an *advanced* topic on learning. I mention this for two reasons: 1. it's not something ppl should be worried about early on, and learning the basics is more important; and 2. I'm less sure about this than my other posts on learning so far. I would still be surprised if there were *significant* errors, though. I'm like 8/10 confident on 1st and 2nd order stuck. I'm 9+/10 confident on the structural epistemology parts.
What happens when two people learn the same thing? It's common sense that there will be some differences in what they learn, how well they can apply it, etc. Some of that is due to pre-existing knowledge, but is there more to it?
## can two people ever learn the same thing?
Consider: Alice and Bob want to learn about something particular, and have similar background knowledge. The first two concepts they need to learn are *X* and *Y*. After that, there's a third concept, *Z*, that builds on both of those. There's also concept *W* that's sort of similar to *(X,Y,Z)* but a bit different too.
Let's consider a concrete example: building a computer. Alice and Bob have finished high school and want to build a computer over the Christmas break. They will need to learn about the components of a computer so they know what to buy (cpu, motherboard, ram, ssds and/or hdds, the case, etc). They also need to know some super-basics about electronics: how to plug all their components together, what cables they'll need to use or buy, calculating power consumption, etc. Those are concepts *X* and *Y*. Concept *Z* is how all the components work together in a complete computer and which configurations work for particular purposes (e.g. gaming, office work, video editing, streaming, music creation, digital art, etc). They need to know some other things too (like how to install an operating system, and background knowledge) -- we don't need to consider those things for this example.
Once they know *Z* they can choose and buy their components and then put it all together. They're both successful.
As it happens, both of them applied for electrical engineering courses at university, and both are accepted. At the end of their break, they take their computers and their knowledge about computers -- *(X,Y,Z)* -- with them to university.
Before we continue let's think about what Alice and Bob learned while studying *(X,Y,Z)*. Can we find out if they learnt *the same thing* or not? What does *the same thing* mean in the context of learning?
I think there are two important ways to look at whether they learnt *equivalent* things or not. One of them is about *the tasks they can perform*; and the other is about *the ideas themselves*.
If we're only concerned with the *results* that certain knowledge gives, we are talking about the knowledge's *denotation*. If Alice and Bob can *perform the same tasks* (they get equivalent results with only negligible differences) then we say they have the same *denotational knowledge*.
We can say that Alice and Bob learned the same thing because they both built a computer, and they can both answer the same questions about the configurations that make sense for certain use cases. This is like the *standardised tests* that are common in schools: a checklist of inputs (questions) and outputs (answers) that students should reproduce. For some types of tests, like text analysis in English, the answers aren't explicitly listed; rather, qualities of good answers are listed (like 'identifies techniques' or 'discusses the interaction of themes', etc). For other tests the answers are explicit (e.g. multiple choice tests); and finally some tests have a mix of both (like maths tests, where the final answer is explicit but the algebra to get there is not).
What if we're concerned with the other option: *the ideas themselves*? How can we compare those?
We can't directly observe ideas. Even if we could see inside Alice and Bob's brains (something they might not like), how would we know what to look for? We can't just ask them either: they can't tell us exactly what their ideas are, and we can't ask them questions on the topic either -- that would basically be like a standardised test. So how do we know if they learnt equivalent things?
Even though we can't *directly* observe ideas, there is a way ideas are used other than to produce results -- *ideas are building blocks for other ideas*. This means that if Bob and Alice learnt *the same thing*, then they should be able to build similar *new* ideas with their *(X,Y,Z)* building blocks.
Alice and Bob will learn a new idea similarly if they have similar building blocks -- if their knowledge has the same *structure*. It's like they have the same lego set of ideas. If their knowledge has *different* structures, then they can't build the same things, like if they had lego sets with different pieces. *Sometimes* they can build the same things, but *not always*. We can say Alice and Bob have the same *structural knowledge* if they can build the same ideas.
Let's consider Alice and Bob learning a basic concept, *W*, about electronics in first yr uni and how it might interact with *(X,Y,Z)*. *W* is similar to *(X,Y,Z)* but also different. Alice and Bob are told that to make electronics you need components and one of: a circuit board, a breadboard, or maybe just a mess of wires. They're told about attaching components to each other, and power, data, and ground and things like that. This is concept *W*. Alice and Bob each have a different question for the tutor:
> How do you connect components if they're the wrong shape, or have different wires?
Bob's full idea was something like: computer components connect together using cables or directly using sockets. The wire components of particular cables go with particular connectors only -- they don't go with other connectors. The connectors you need on a cable are the male/female versions that correspond to the connectors on the devices. If a device connects directly, then you can use a cable with one male and one female end to connect the device somewhere else. If you plug everything together with matching sockets, then it'll all work out.
Bob asked his question b/c his understanding was at the level of emergence of cables and connectors and things you could plug together.
> How can we replace components if some components are out of stock or too expensive?
Alice had a different idea, something like: sockets and connectors are chosen to make sure ppl plug the right things together. Manufacturers choose particular wires and shrouding based on: availability and price, the requirements of the components being connected (i.e. standards like HDMI), what the customer expects, and how the cable will be used (are there lots in a bunch, does it need to go round corners, etc). You can cut up multiple cables and join them together (splicing) to make cables with different combinations of male/female connectors, to change between compatible connectors (an adaptor), or to replace faulty wires -- provided you are combining wiring of high enough quality (excess capacity). Cables are only there to deliver power to components or transmit signals between them.
Alice asked her question b/c her understanding was at the level of emergence of wires and semi-conductors with some economics thrown in.
I hope you can see how their knowledge differs in *structure* even though they're both able to use it to put together the same computers, diagnose the same problems, know which replacement parts or upgrades to buy, etc.
*Structural knowledge* matters when we want to *build on*, or *change* knowledge. When we want to use it for different things, or apply it to new situations. If the *structure* of *(X,Y,Z)* is different in different people, then they can still have the same *denotational* knowledge, but they will diverge when they learn new things.
Just because some knowledge has the same *denotation* does not mean that it has the same *structure*.
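A programming version of this distinction (my illustration, not part of the original structural epistemology examples): two functions can have the same *denotation* -- identical outputs for every input -- while having different *structure*, and the difference only shows up when you try to extend them.

```python
def sum_to_n_loop(n):
    """Add 1..n one at a time (structure: iteration)."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_formula(n):
    """Use the closed form n(n+1)/2 (structure: algebra)."""
    return n * (n + 1) // 2

# Denotationally equivalent: the same inputs give the same outputs.
assert all(sum_to_n_loop(n) == sum_to_n_formula(n) for n in range(100))
```

They pass the same "standardised test", but they extend differently: changing the loop version to sum squares instead is a one-line edit, while the formula version needs a whole new derivation -- same denotation, different building blocks.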
We do know how to record some elements of *structural* knowledge. curi gives some examples in Structural Epistemology Introduction Part 1 and Structural Epistemology Introduction Part 2.
But ideas in the mind are different from ideas that are written down. We don't know how to compare ideas directly. However, we do know some things about people and how ideas are created. Ideas aren't written into your brain like things are written on paper. Ideas are created through an evolutionary process -- exactly *how* we don't know. You learn something when your brain creates an idea that explains the things you're trying to understand -- or, for simple things, when you can repeat an action or achieve a result. That means your brain needs to combine pre-existing ideas repeatedly (thinking) until it finds an idea that satisfies some success criteria. Your brain can do a lot of auto-criticising; that's when you're *thinking* but it's like *work*, like you're waiting for your unconscious mind to tell you the answer. Sometimes you have an idea that's *nearly* right, only to learn of a criticism later (maybe you came up with it or someone else did). Our brain makes *guesses* and nothing will guarantee any of those guesses are correct, but we *can* know when something is *not* correct if we know a criticism of it.
When two people learn the same thing, they might have the same denotational knowledge (within some scope), but they'll ~never have exactly the same structural knowledge. There will always be some differences.
## 1st order stuck and 2nd order stuck
When you get stuck, there are two ways that can happen.
First, you can *stop learning altogether* (this can be specific to a single topic or it can be bigger, too). This is *1st order stuck*. It's the normal kind. It's fixed in the normal ways and normal techniques work. In the example above, Bob was 1st order stuck; he just needed to learn some new things about circuitry to move on.
You *can* be 1st order stuck because of problems with your *learning method*. You could be doing the wrong type of practice, or not learning from certain media types (e.g. if you hated videos), or overreaching in general.
Second, you can be making a *mistake in the act of knowledge creation*; you can have a *structural* problem with *the results of your method*. This is like whether you're creating highly general knowledge or not. This is different to 1st order stuck b/c *you can still make progress*, and technically *that progress can still be unbounded*. But you will need to do more maintenance of your existing knowledge and it will be more fragile.
This is being *2nd order stuck* and means **the rate of your learning will suffer.** You won't be able to come up with ideas as well as exceptional people, you won't be able to use ideas as well as exceptional people, and most importantly *you won't be able to learn as fast as exceptional people*. It's overhead on your *velocity and acceleration*, not on the distance you can travel.
2nd order stuck isn't as clear cut or pervasive as 1st order stuck typically is. It can vary by topic.
Authors note: I think this is currently the limit of my understanding.
How do you tell if you're 2nd order stuck and on what? How do you learn/practice/fix such a mistake? What are the techniques to get 2nd order unstuck?
Can you turn 2nd order stuck into 1st order stuck? ... maybe? The only way I can think of is to learn about learning, but I'm not sure that's enough.
## Further reading
The above post/essay/thing is the longest (good) FI thing I've written, I think. It took me 2.5hrs to write. I don't think I was distracted for much of it, if any. I only paused a few times to think of what to say next and those times were for less than 60s (probs less than 20s but I didn't measure them). It's nearly 2000 words long, and I'm pretty happy with how I did. I think it might have more original thinking than usual which I'm excited about (the 1st/2nd order stuck, not structural epistemology).
I was considering not putting in the example when I started. I think it was probably good that I did. That said, it took longer to write because I included the example, and I'm not sure it was necessary or efficient. Either way, I do think it makes things clearer.
project: book on learning - brainstorm of content
I had a go at a first draft for an outline/structure for my project on a series of posts on learning (and maybe a book). If every entry was a post (excluding further reading) then there'd be ~70 posts/chapters/things w/ this structure (I added some while writing this, so more than 70). I imagine many of them would be like 500 words on average, so it'd be 35k words total. That seems do-able, but is much bigger than anything I've taken on before. On the other hand, I have written like 3k+ words on this already (maybe more like 5k) so that's like 15% of the way there without really trying.
I think I could write at least a few hundred words on each of these topics (more on some, less on others).
I think I'd write the book using examples and things from speedrunning. There are a few reasons for this:
- speedrunning is an easy topic to analyse and apply techniques to
- there's already a good amount of speedrunning related knowledge/analysis on FI
- speedrunning is a highly competitive sport where knowledge and practice are both highly valued; people care about techniques and efficiency -- overlapping values
- it's popular, so a book like this could help lots of people -- large audience (even a tiny success in the speedrunning community is big relative to FI)
- seems like those people are starved for content to some extent, and keen to find an edge -- demand
- easy to measure results -- evidence
Popularity, money, fame, etc are not goals for me w/ this project.
My goal is to learn philosophy (and particularly about learning) and establish a track record. There's a minor goal to produce work that is useful for FI ppl.
I'm also considering demonstrating some of the techniques via speedrunning myself. If I can write a book on learning, why can't I use the techniques to achieve something exceptional? Well, my experience speedrunning is minimal but non-zero (that's a risk) and I didn't excel at games growing up so there might be some ground to make up. It also might be a big investment of time, which is not something I think I can necessarily afford; but I will need downtime so doing it during that is an option. I can always do speedrunning as a hobby and collect data as I go along. There's a big risk in focus, too, I don't want to compromise learning philosophy for the sake of speedrunning.
* Gem of Seeing
* Goals and Achievement
* The Alternative
* Learning First
* Attitude - I - Life
* Stuck - I
* Attitude - II - Step by Step
* Life Isn't a Speed Run
* Main Quest
* Side Quests (Indirection)
* Attitude - III - Problems
* Criticism & You
* On Yourself
* On Life
* On Problems
* On Others
* Attitude - IV
* Honesty - I
* Priorities - I
* Your Future
* Freedom & Autonomy
* Bounded vs Unbounded - I
* You Can be a Beginning of Infinity
* What / When / Where
* Learning - What it means
* Learning cycle
* Bounded vs Unbounded - II
* Bottlenecks & Capacity
* Focus - I
* Priorities - II
* Stuck - II
* Honesty - II
* Overreaching - I
* Humility & Doubt
* Questions - I
* Overreaching - II
* Honesty - III
* Priorities - III
* Focus - II
* 1-3 things
* Questions - II
* Excess Capacity
* Self Judgement
* Overreaching - III
* Questions - III
* Sense of Life (?)
* Bounded vs Unbounded - III
* Stuck - III
* Getting Unstuck
* Questions - IV
* Structural Epistemology
* Learning, the Mind, and AGI
* 1st / 2nd Order Stuck
* Questions - V
* Overreaching - IV
* Further Reading & Acknowledgements
* Fallible Ideas & Elliot Temple
I had a few thoughts on a title (I have not focused on it), and I've just been thinking of it as called "Learning". In parallel w/ reading Rand's *Philosophy* I started thinking of titular variations like "Learning: Who Needs It?" or "Learning: You Need It". TBH I don't think that's a great idea, and I think I like just "Learning" more anyway.
I'm not sure about breaking it up into Why, What/Where/When, and How. I think it makes some sense in that it's like: convince the reader this matters, explain the concepts and building blocks, show how to put it all together. The sections/parts can be loose too, like just broad strokes, not hard and fast categories. I think leaving advanced stuff to the end is good, though. It marks it as more optional, too.
I don't think I'd write it in the order that it is here. Some later bits will be easier, etc. I think I'll need the most time on the "Why" section.
I posted the outline to my site; this is where I'll update it in future. I'll post here if there are major updates. https://xertrov.github.io/fi/posts/2020-11-20-learning-book-content-brainstorming/
I forgot about the lack of list support on curi.us -- check my site for the correct indentations.
how to do FI
This is a bit of a guess at a general method for doing FI. I think it's roughly what I'm doing. there are probably things I've missed. feedback welcome ofc.
First, you have to want to learn and read and write down your thoughts. If that isn't the case then you have a problem with mindset/attitude. You might need to find a decisive reason to want to start writing (I needed this). My reason is establishing a track record. I don't think I can do the things I want to do without getting better at philosophy, and I need a track record for that and to be taken seriously (and to expect to be taken seriously). Writing is now a direct part of my plan (my *method*, not just my *goal*), so I want to do it. There are other pitfalls you can run into at this stage, like if you don't want to learn or read.
- first, write down all your new thoughts with priority. don't expect ppl to read them, you're writing them down for you (to practice writing, to get them clear in your mind, to have a track record, to expose them for criticism, etc). one exception to this is if you have too much to write and can like go and go and go. I don't think that's v common but that's a different situation if you fall into that bucket.
- if you run out of things you want to write down, or don't feel like it at that time, then you should learn by reading new posts or watching curi videos/podcasts or consuming good stuff (mb high concentration like Rand or Goldratt or low concentration like okay stuff on YT). take notes, esp questions. you don't need to take lots of notes like ppl do at uni. note down important things.
- you should keep up with your discussions as much as you can, and with priority over new discussions and new materials. it's okay if you want to end/postpone a discussion because you think there's something you should learn first, but if that's the case you should say so. if you need to abandon a discussion, then it's better to say that's what you're doing as early as possible rather than to not say that.
- you should try to make as effective use of criticism as possible. as you get better at that you can have much shorter conversations before realising that an error happened and what/how/why/etc. **do postmortems**, you don't have to for everything, but it's good practice to do it for simple stuff and it's really important to do them for large things when you understand the mistake. you want to expose your EC to criticism, too.
- get used to having a content backlog and being decisive about what you want to focus on. get used to organising your time so you can keep commitments around discussions and things. practice being consistent and not evading or abandoning things you start.
> can two people ever learn the same thing?
in particular the part talking about:
> here's a metaphor to help understand the issue: **everyone's mind has its own programming language.**
I read *Anthem* yesterday. It's quite short (like ~1hr to read casually) and I recommend it.
It reminded me a lot of *1984*, esp at the beginning. *Anthem* deals with the philosophy a bit more directly, tho, and is written more simply. (At least from what I can remember of 1984.)
In Australia, 1984 is a text that's sometimes (always?) studied in advanced high school English, and I was a bit surprised yesterday that Anthem wouldn't be included in that module. 1984 spends a lot more time on thoughtcrime / doublethink / controlling thought via language, but the essence is there in Anthem of all of that. IDK why Anthem isn't included in that module, but I hope it's not the common anti-Rand attitude (I suspect it is due to this).
I thought the setting of Anthem was a lot more believable than 1984; like if you're going to destroy ppl's ability to think then it doesn't make much sense to have a highly advanced and somewhat productive society. Though 1984 is set in the near future and Anthem is set like ~hundreds of years in the future. It always annoyed me a bit that 1984 was set like 40 years after Orwell wrote it (1948) but nobody can remember anything from 3 decades ago.
Note that Anthem was published in 1937, well before 1984's publication, and before seeing WWII or its aftermath with the USSR.
Yes Rand's is more realistic. Rand understands how socialism is incompatible with science and wealth.
There are other kinda similar books. Everyone mentions *Animal Farm* (good IMO) and *Brave New World* (read long ago, liked it fine at the time) but not *This Perfect Day* (I liked it) or *We* (disliked first few pages, plan to try reading it again but haven't yet).
Different but kinda related is *One Day in the Life of Ivan Denisovich* which is kinda like a really short version of some of *Gulag Archipelago*. It's about the actual USSR instead of a sci-fi dystopia with some inspiration from Russian communism. Rand's *We The Living* is also about the actual USSR and it's very good and gets less attention than it deserves compared to AS and FH (it's not as good as them but it's still a great book, and for Rand's fiction AS/FH get ~all the attention).
I was introduced to this book some years ago, essentially as '1984 but before 1984 was written'. I'm not sure if there are multiple translations, but I'm pretty sure the original was in Russian. (googled it: yup, also it was written in 1920-21!)
> Note that Anthem was published in 1937
Yeah, I noticed that and wondered if Orwell had read *Anthem*
> well before 1984's publication, and before seeing WWII or its aftermath with the USSR.
Ppl seem to have this idea of science that glorifies prediction (though they apply it inconsistently, e.g. the explanation of quantum computation from MWI is often ignored). This glorification of prediction seems to be inconsistently applied to literature too.
> Rand understands how socialism is incompatible with science and wealth.
This is a big one that ppl don't seem to get. I think most ppl think that progress and wealth are like independent of systems like communism, and they use excuses like 'those ppl just didn't do communism right'.
On the communism-excuse note: everyone in favour of capitalism could argue the same thing -- nobody has done capitalism right either! Like, in the last 100 yrs, when was capitalism done right? It wasn't. So by the socialists' logic: they can't argue against capitalism on the basis of how things are *now, in contemporary 'capitalist' systems,* for the same reason they *do* argue that crits of communism don't apply because ppl didn't do it right. Their logic is flawed, ofc, but I only noticed that contradiction during/after reading *Anthem*.
I liked *Animal Farm* too. Haven't read *Brave New World* or the others you mentioned, though.
PS. I added http://fallibleideas.com/originality to further reading on https://xertrov.github.io/fi/posts/2020-11-20-1st-order-getting-stuck-vs-2nd-order-getting-stuck-and-structural-epistemology/. thanks for that
#18841 Did you read *We*? If so, do you think it's good?
#18843 No. It's only like 2x the length of Anthem, though, so might do that tonight + tomorrow. Will post here if I do.
#18844 I think I was way off on the length thing here, *We* is more like 4x.
I googled for the length and this link mentions ~1hr at 500 WPM. After I posted #18844 I started to wonder how that could be accurate b/c I didn't push my speed while reading it, so the numbers started to feel wrong. This site has it at about 1hr to read, too, but 250 WPM (which sounds better) and has the length at ~15k. *We* is like 62k words.
#18845 just check book lengths yourself. i pasted it into a text editor and it's 19k after deleting the text before and after the actual novel part.
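The reading-time estimates being compared above are just word count divided by reading speed. A minimal sketch (the word counts and WPM figures are the rough estimates from this thread, not measurements of the actual texts):

```python
# Reading-time arithmetic using the thread's rough numbers.
def reading_time_minutes(word_count: int, wpm: int) -> float:
    """Estimated minutes to read `word_count` words at `wpm` words per minute."""
    return word_count / wpm

# A ~15k-word estimate at 250 WPM comes out to about an hour:
print(round(reading_time_minutes(15_000, 250)))  # -> 60

# But at the ~62k-word estimate, *We* would take ~4x that:
print(round(reading_time_minutes(62_000, 250)))  # -> 248
```

This is why the "~1hr" sites and the ~62k figure can't both be right: at least one of the inputs (word count or WPM) is off.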
#18841 I haven’t read *Animal Farm*, *1984*, or *Brave New World*. I did look at the below linked video summaries of them just some weeks ago. I mostly liked them. I don’t know if the summaries presented the books well though. Maybe someone who has read the books can comment on the video summaries.
*Brave New World*
#18848 I think videos like that help teach you *about* a book but they don't really help you *learn* the book. Like you don't learn the things the book has to teach you via those videos. They did remind me of plot points I forgot though. I am not sure the 1984 video mentioned "doublethink" for example, but that's a major part of the book.
To get an idea of how much you miss: here's another 1984 video. How much was different between the two? Which ideas were only in one of them? If those two videos are that different, how much was in the book that wasn't in either video?
As far as like reminders / summaries go, the first two are okay, but I don't think they're a replacement for the books.
thoughts on Anthem and 1984
Watching the video deroj linked on 1984 made obvious some of the major similarities it has with Anthem, like the protagonist journaling, language/thought control, a love affair, and a spark of rebellion.
I first thought that one big difference was a happy vs sad ending -- Anthem is hopeful, 1984 is not. But the reason for that, and a more significant factor, is that Rand writes about Heroes, but 1984 and similar books are about normal ppl. That's why her books end hopefully, and books like 1984 don't. I think that's also why ppl think Rand's books are unrealistic: *ppl don't believe heroes exist*. (curi's talked about ppl's complaint that Roark isn't realistic, which is where I'm drawing some of these ideas from.)
A world without heroes is sad. It's a world where you can't be exceptional, you can just be better than average. A world in which you can't hope to make breakthroughs and progress, you can just push the needle a bit. It's a world for Sisyphus and sisyphites.
> *ppl don't believe heroes exist*
Hmm, not sure about that. e.g. look at how ppl treat Musk. I think I'm missing something.
> I think videos like that help teach you *about* a book but they don't really help you *learn* the book. Like you don't learn the things the book has to teach you via those videos.
I agree. I think it depends on what kind of book it is and who made the video summary, but generally I think one can pick up some major themes of a book in these kinds of videos.
My goal was to learn a little bit more *about* the books, but I didn’t want to read them. They seem very depressing to me. I do not like to engage with depressing stuff.
#18851 People commonly don't think they or a regular person can be a hero. Partly this seems like an excuse and partly like they see a huge gap between themselves and *high social status persons*. In various ways I think people confuse heroism, or virtue or merit in general, with social status, and expect the two to match.
People don't explicitly deny the possibility of an undiscovered hero in general, but most people sure aren't going to be the one to recognize that. (The two passages about pretzels and middlemen in The Fountainhead are relevant.) And they will get offended by a bunch of traits that violate social rules, even if they'd ignore or forgive those traits in a person who already had social status. People's judgments re conformity tend to be very biased. Social rules aren't very objective. People mostly try to make their own conclusions about social status fit whatever they think that a lot of other people already concluded.
> I agree. I think it depends on what kind of book it is and who made the video summary, but generally I think one can pick up some major themes of a book in these kinds of videos.
Yeah. Videos like that can be useful, but they're not a replacement. I watched a few more 1984 videos after those 2 last night, and if you *are* going to watch summary videos like that I think it's well worth watching 3 or 4 or more. Sort of like a second opinion on medical stuff. You don't want to get stuck b/c of one person's bad ideas.
> My goal was to learn a little bit more *about* the books, but I didn’t want to read them. They seem very depressing to me. I do not like to engage with depressing stuff.
Yeah, that seems a fair goal.
Depressing? IDK. In some ways they're the opposite b/c those things *aren't* happening, but I think I see what you mean. They're about people being broken and suffering and having things they love or value being taken away, etc.
> I do not like to engage with depressing stuff.
If that's because you don't like the impact that stuff has on you, I think that can be solved with mindset.
> partly like they see a huge gap between themselves and *high social status persons*.
Yeah, this makes sense.
> In various ways I think people confuse heroism, or virtue or merit in general, with social status, and expect the two to match.
Now, I think that ppl say Rand's heroes are unrealistic b/c in her novels heroism and social status don't match (often the opposite). It's almost like ppl judge e.g. Roark as unrealistic b/c they think that a real-life version would have high social status. (They probably also think that real-life Roark would compromise to achieve things of magnitude -- and they don't know how to tell the difference between that and Great achievements.)
> People mostly try to make their own conclusions about social status fit whatever they think that a lot of other people already concluded.
Quotes from *The Fountainhead*
part 2 ch 10:
> The battle lasted for weeks. Everybody had his say, except Roark. Lansing told him: “It's all right. Lay off. Don't do anything. Let me do the talking. There's nothing you can do. When facing society, the man most concerned, the man who is to do the most and contribute the most, has the least say. It's taken for granted that he has no voice and the reasons he could offer are rejected in advance as prejudiced--since no speech is ever considered, but only the speaker. It's so much easier to pass judgment on a man than on an idea. Though how in hell one passes judgment on a man without considering the content of his brain is more than I'll ever understand. However, that's how it's done. You see, reasons require scales to weigh them. And scales are not made of cotton. And cotton is what the human spirit is made of--you know, the stuff that keeps no shape and offers no resistance and can be twisted forward and backward and into a pretzel. You could tell them why they should hire you so very much better than I could. But they won't listen to you and they'll listen to me. Because I'm the middleman. The shortest distance between two points is not a straight line--it's a middleman. And the more middlemen, the shorter. Such is the psychology of a pretzel.”
and part 4 ch 1:
> Kent Lansing said, one evening: “Heller did a grand job. Do you remember, Howard, what I told you once about the psychology of a pretzel? Don't despise the middleman. He's necessary. Someone had to tell them. It takes two to make a very great career: the man who is great, and the man--almost rarer--who is great enough to see greatness and say so.”
> > People mostly try to make their own conclusions about social status fit whatever they think that a lot of other people already concluded.
FH with my emphasis:
> “Peter, you’ve heard all this. You’ve seen me practicing it for ten years. You see it being practiced all over the world. Why are you disgusted? You have no right to sit there and stare at me with the virtuous superiority of being shocked. You’re in on it. You’ve taken your share and you’ve got to go along. You’re afraid to see where it’s leading. I’m not. I’ll tell you. The world of the future. The world I want. A world of obedience and of unity. *A world where the thought of each man will not be his own, but an attempt to guess the thought in the brain of his neighbor who’ll have no thought of his own but an attempt to guess the thought of the next neighbor who’ll have no thought—and so on, Peter, around the globe.* Since all must agree with all. A world where no man will hold a desire for himself, but will direct all his efforts to satisfy the desires of his neighbor who’ll have no desires except to satisfy the desires of the next neighbor who’ll have no desires—around the globe, Peter. Since all must serve all. A world in which man will not work for so innocent an incentive as money, but for that headless monster—prestige. The approval of his fellows—their good opinion—the opinion of men who’ll be allowed to hold no opinion. An octopus, all tentacles and no brain. Judgment, Peter? Not judgment, but public polls. An average drawn upon zeroes—since no individuality will be permitted. A world with its motor cut off and a single heart, pumped by hand. My hand—and the hands of a few, a very few other men like me. Those who know what makes you tick—you great, wonderful average, you who have not risen in fury when we called you the average, the little, the common, you who’ve liked and accepted those names. You’ll sit enthroned and enshrined, you, the little people, the absolute ruler to make all past rulers squirm with envy, the absolute, the unlimited, God and Prophet and King combined. Vox populi. The average, the common, the general. 
> Do you know the proper antonym for Ego? Bromide, Peter. The rule of the bromide. But even the trite has to be originated by someone at some time. We’ll do the originating. Vox dei. We’ll enjoy unlimited submission—from men who’ve learned nothing except to submit. We’ll call it ‘to serve.’ We’ll give out medals for service. You’ll fall over one another in a scramble to see who can submit better and more. There will be no other distinction to seek. No other form of personal achievement. Can you see Howard Roark in the picture? No? Then don’t waste time on foolish questions. Everything that can’t be ruled, must go. And if freaks persist in being born occasionally, they will not survive beyond their twelfth year. When their brain begins to function, it will feel the pressure and it will explode. The pressure gauged to a vacuum. Do you know the fate of deep-sea creatures brought out to sunlight? So much for future Roarks. The rest of you will smile and obey. Have you noticed that the imbecile always smiles? Man’s first frown is the first touch of God on his forehead. The touch of thought. But we’ll have neither God nor thought. Only voting by smiles. Automatic levers—all saying yes ... Now if you were a little more intelligent—like your ex-wife, for instance—you’d ask: What of us, the rulers? What of me, Ellsworth Monkton Toohey? And I’d say, Yes, you’re right. I’ll achieve no more than you will. I’ll have no purpose save to keep you contented. To lie, to flatter you, to praise you, to inflate your vanity. To make speeches about the people and the common good. Peter, my poor old friend, I’m the most selfless man you’ve ever known. I have less independence than you, whom I just forced to sell your soul. You’ve used people at least for the sake of what you could get from them for yourself, I want nothing for myself. I use people for the sake of what I can do to them. It’s my only function and satisfaction. I have no private purpose. I want power.
> I want my world of the future. Let all live for all. Let all sacrifice and none profit. Let all suffer and none enjoy. Let progress stop. Let all stagnate. There’s equality in stagnation. All subjugated to the will of all. Universal slavery—without even the dignity of a master. Slavery to slavery. A great circle—and a total equality. The world of the future.”
> Yeah. Videos like that can be useful, but they're not a replacement. I watched a few more 1984 videos after those 2 last night, and if you *are* going to watch summary videos like that I think it's well worth watching 3 or 4 or more. Sort of like a second opinion on medical stuff. You don't want to get stuck b/c of one persons bad ideas.
Good point re watching multiple videos.
> Depressing? IDK. In some ways they're the opposite b/c those things *aren't* happening ...
IDK. To me that sounds like lowering the standard in some way. In a "~things could be worse" rather than "~things could and should be better" kind of way.
> but I think I see what you mean. They're about people being broken and suffering and having things they love or value being taken away, etc.
You are correct. It's this part that I find depressing. I think it influences me negatively (makes me less happy) when reading / watching it.
> If that's because you don't like the impact that stuff has on you, I think that can be solved with mindset.
That's a fair point. I don't think it is a big issue for me that would require me to put in dedicated work into changing it. I can read / watch some stuff of this sort ("dystopian realism", see below) without it having big and long lasting negative effect on me. Like I wouldn't enjoy it, but I wouldn't be depressed for long after I quit reading / watching it.
I haven't dedicated much thought to this but I think that it's possible to split the dystopian stories into at least two bigger genres (or maybe styles is a better word for it?): "dystopian realism" and "dystopian romanticism".
The former would be something like "capturing the moment of everyday life in a dystopian setting" (mainly misery of some sort) and the latter would be more like a "success story in a dystopian setting". I do not think that I find the latter depressing.
I do not think that a "success story" necessarily needs a happy ending, but it needs some kind of greatness. I'm not sure. I remember liking Cyrano de Bergerac, for example, despite it not being a happy ending kind of story.
A thought on life, focus, and bottlenecks: often ppl focus on non-bottleneck parts of their life. Why? I think one reason might be that it's very difficult to "fail"; like, even if someone doesn't meet their goals, they don't experience much of a downside. So they can 'take away' only positive things. But did anything really get better for them?
> A thought on life, focus, and bottlenecks: often ppl focus on non-bottleneck parts of their life. Why? I think one reason might be that it's very difficult to "fail"; like, even if someone doesn't meet their goals, they don't experience much of a downside. So they can 'take away' only positive things. But did anything really get better for them?
Sometimes the reason something becomes a bottleneck for someone is that it’s a thing they’re confused about or not seeing the importance of or dishonest with themselves about or have negative emotions about. All of these are reasons they wouldn’t focus on that thing.
plans about posts I'd like to make in Dec
Some posts I would like to make within the next week:
- postmortem: Max's December and lack of FI output
- postmortem: "wrong (and aggressive and combative)" -- this will be a postmortem on https://groups.google.com/g/fallible-ideas/c/rDEtQgxqkY8/m/BYfRRzoAAQAJ including reflection on ET's 1st reply and AF's 1st reply. /1747#18947 is involved in my reflections on aggression.
- continuing collaborative writing thread with AnneB
when are corporations like static societies?
I've had this thought a few times, I think, but don't think I've written it down.
I think there's a qualitative difference between companies wrt observable progress, similar to static societies.
The idea is v similar to what DD says in BoI, and goes like: a static corporation is one where project length exceeds typical employment duration. i.e. progress isn't noticeable over a typical engagement period.
team membership is related too, b/c if ppl do secondments and things then the team loses knowledge.
when projects take longer than the typical employment duration then it means knowledge is being created and lost within the timescale of the project, and *that's the norm*.
I think this makes the problem fairly self-evident by FI standards.
I'm not sure if this is typical for large corporations or not, but I suspect it is. There are some exceptions, but for the most part this seems to (partially?) explain why progress is so hard at those sorts of companies. and projects take a long time too b/c there's huge amounts of legacy and pre-existing commitments, like keeping all the existing software/apps working, etc.
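The criterion above can be stated as a one-line comparison. A minimal sketch (the function name, inputs, and example numbers are illustrative, not from the post):

```python
# Sketch of the "static corporation" criterion: a project is ~static
# (in the BoI-style sense used above) when its expected length exceeds
# the typical employment duration, so knowledge created during the
# project is routinely lost before the project finishes.
def is_static(project_years: float, typical_tenure_years: float) -> bool:
    """True if progress wouldn't be noticeable over a typical engagement."""
    return project_years > typical_tenure_years

# Hypothetical numbers: a 5-year project with ~2.5-year median tenure
# means most contributors leave (taking knowledge with them) mid-project.
print(is_static(project_years=5.0, typical_tenure_years=2.5))  # -> True
print(is_static(project_years=1.0, typical_tenure_years=2.5))  # -> False
```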
sometimes I get stuck writing post mortems for big things. That's b/c I want to say what the mistake was *and* explain the situation enough that my solution can be understood and judged against my reasoning. I also want to expose my reasoning to criticism but that goal can be met with a separate post, too.
sometimes corrective action (avoiding a mistake in future) is to stop doing something. these are tricky post mortems to write b/c you need a good explanation to know the full extent of what you should not-do anymore; it doesn't work to just say 'I won't do that again' if you have a really narrow idea about what 'that' is.
I'll often start writing but I get stuck with things like topic-distractions or worrying about the quality of the thing I'm writing. Sometimes I'll want to get down some thoughts about something else (mb just tangentially relevant) and end up pursuing other ideas for hours. that can be good with freewriting (particularly because you can get re-distracted so you write on a lot of topics) but it makes writing about focus-demanding topics much harder.
some thoughts on mutual goals, scrum, and software dev
scrum is a system for attaining mutually held goals. the fact it's used to build software is coincidence. software has some universality. that universality means that component tasks in software are unpredictable and can come from anywhere (e.g. doing quantum mechanics or art theory or stuff about how butterflies fly). if scrum is capable of being a framework to develop any software then it must also be a framework that can be used to achieve any mutually held goals.
what would it look like if scrum was used to play *among us*? (it'd be an utter failure; not that other software dev methodologies would do any better) => it's not very resistant to incompetent/dishonest/malicious ppl (note: ppl are often incompetent or dishonest but rarely malicious).
why should we use a methodology for building software when it doesn't work for lots of other situations where *a system for attaining mutually held goals* is needed?
maybe scrum does work for software (a bit), but by chance rather than by design?
if you use good thinking methods when you think about goals, then you'll form the foundational ideas with a better *structure* (reflecting the thinking methods). bad *structure* means that the negative impacts of goal-modification were -- sometimes -- entirely avoidable if only the goals (and the component ideas of those goals) were better structured.
is it possible that, practically speaking, *most* of the negative effects of goal-modification are avoidable?
are goals made with scrum seriously flawed? are most ppls ways of thinking about goals flawed? are there well-known/common better ways? are there known better ways?
~~ sometimes goal changes will have a big impact that you might otherwise be able to avoid almost entirely (even if goals are updated frequently) just by structuring your goals better. ~~
what makes scrum better than alternatives? some things are possible in scrum that aren't (or don't happen) with other systems. notable points:
- everyone *must* be involved in planning which means there's a place for ppl to say thoughts about what gets planned (crits, ideas, alternatives, etc)
- there *must* be time dedicated to discussing "how things went" which means there's dedicated time for self-criticism and self-judgement (at the team level particularly)
- review and retrospective (retro) are separate. the review is where the team + stakeholders "collaborate about what was done in the Sprint" and "collaborate on the next things that could be done to optimize value" (scrum.org).
- review is a chance for: updates on what's done (but not status updates); benediction; praise; msgs of appreciation, etc.
- retro is a chance for: team self-judgement; "What will we commit to improve in the next Sprint" (scrum.org); ideas about team self-improvement; voicing of concerns, feelings, crits.
- ~all you can really do is give ppl a chance to do/say something (can't force ppl) -- provided you're in a rational environment what more can you do to elicit ppl to say relevant stuff?
- team communicates every day by default (or at least regularly and frequently) -- tho encouraged to share bare-min during the all-hands standup.
- scrum only handles teams of 3-10 ppl
- dedicated roles for scrum master and product owner
- sorta geared around in-person comms
- geared around synchronous comms
- PO is an authority; there's a single point of failure in the product owner (and mb in scrum master too, depending on the team; i think a team can manage better without a SM than without a PO using vanilla scrum)
- if something can't be brought up in standup then you're sorta not meant to bring it up till the review/retro
- encourages ppl to solve issues outside scrum => i.e. using some other methodology; it's not generally specific about which issues or when
- presumes there's some bigger structure around the team that provides guidance/endorsement/protection
- not good for ultra-high-cadence projects -- in one day you can't do 6 standups + planning + review + retro
- not good for short projects -- scrum presumes an ongoing cycle of sprints where each sprint is 1-4 weeks approx
- not good for async projects or volunteer projects (which tend to be async)
- time boxing review/retro means some things don't get discussed or resolved
- scrum does not encourage that the team uses an unbounded method for solving internal problems (b/c of time boxing and other constraints)
- scrum doesn't help organise work -- left up to other preexisting systems/ideas
- scrum advocates taking notes and records of meetings but teams don't use those records for anything; what's the point? scrum provides mainstream answers to those qs but doesn't integrate records/minutes into its process (which is also pretty mainstream).
topic learning report: photography
I spent a bit of time in December learning about photography. I hadn't had any experience before that (besides like taking average photos with phones, etc).
I feel like I've crossed a bit of a breakpoint wrt both my understanding of some fundamentals and my ability to do photography well. i wouldn't say that i'm good at it, though. i've taken some good photos (and mb some good video, but that's harder) though there's luck mixed in; i can't do it consistently.
here is a summary of notable points:
* i've reproduced the retina effect in bladerunner (reflections off the retina) and have had to figure out some practical stuff that i couldn't find documented. i've also found some common misconceptions around the effect that professional film makers have gotten wrong (those errors weren't constraining, though).
one example of such a misconception is that you can do colors other than the yellow/orange associated with the effect. you can't. that was easy to show: I just held colored gels in front of the relevant light source. the effect all but disappears for colors outside the red-to-yellow spectrum (v slight with green and ~0 for blue). cats eyes are different though; depending on the angle you can get lots of different colors. the colors remind me of an opal.
i also figured out how to get the effect to be active over the full frame of the image, not just a small section, and what properties you need in a lens to do that (smaller lens diameter is better).
there are some other practical considerations too, like which materials work as semi-reflective surfaces to split the light source and which orientations work or produce side effects or don't work at all. i came up with some rudimentary rules about what to do, when, how, etc.
* i took a self portrait today that I think is a good photo. I'm quite happy with it. it's my discord profile photo now and here's a larger version. There's no post-processing other than setting the color levels in photoshop (since it was taken in raw format). I hadn't used photoshop for that before so I dragged some color level things around to bring out the highlights and stuff in the photo (which could have been done in-camera, too). I think that made the squareish visual artifact in the top left a bit more prominent but i'm not too fussed about that atm. It was taken with an 85mm f1.4 lens with a 0.71x speedbooster (so equiv to a 60mm f1.0). the photo is actually of an acrylic semi-transparent mirror that I was using to investigate 'why are reflections in your eye always in focus but reflections in a flat mirror aren't?'
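(aside: the lens equivalence above is just multiplication -- a speedbooster/focal reducer multiplies both the focal length and the f-number by its factor. a minimal sketch checking the 85mm f1.4 + 0.71x numbers:)

```python
# A focal reducer ("speedbooster") multiplies both focal length and
# f-number by its factor. Checking the 85mm f/1.4 + 0.71x claim above.
def with_booster(focal_mm, f_number, factor):
    """Return (effective focal length in mm, effective f-number)."""
    return focal_mm * factor, f_number * factor

focal, fnum = with_booster(85.0, 1.4, 0.71)
print(round(focal, 2), round(fnum, 2))  # roughly 60mm and f/1.0, as stated
```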
* today I figured out an answer to the above question (took about 3 hrs).
if you try and take a photo of the flat reflection of something (in a mirror; reflected from a puddle/lake) then you have focus and depth of field like taking a photo of the subject directly.
but if you take a photo of someone's eye, then you'll notice the reflections in someone's eye are also in focus. why are curved mirrors special?
i first thought there might be something to do with our eyes being wet (mb something to do with the refractive index), but that didn't seem right and would be hard to test, anyway.
i googled a bit and found a photo of bubbles and noticed the reflections were in focus there too (the bubble was also in focus). i then looked into spherical mirrors. as it turns out spherical mirrors are a bit too interesting and ppl usually talk about fun stuff rather than principles.
eventually i just watched a tutorial on convex/concave reflections and that taught me how to calculate the image created in a curved mirror. i used the back of a spoon to investigate and test some predictions.
i found that, for convex mirrors, the image actually *isn't* all in focus at once. different objects' reflections will have different focus. however, you're not focusing on the object anymore, but the object's image. in a convex mirror, the image that's created is virtual and diminished. particularly, the infinite distance between the mirror's surface and infinity is compressed to a finite length. i think that length is the distance between the surface and the center of curvature, but not 100% on that and haven't done the maths.
what's actually happening is that, since the image created by the mirror is compressed, you don't need a very large depth of field to (practically) get everything from the surface to infinity in focus at once.
so it looks like everything is in focus at once when it isn't.
i think you could use this as the basis of a new way to do macro photography, but not sure about that. there are some other problems to solve, too. if that idea works out then you could take photos which aren't possible currently (there's an australian guy who invented a lens that can do some similar unique stuff but it seems like those lenses are like uncommon or expensive or otherwise inaccessible).
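to put some numbers on the compression idea: here's a minimal sketch using the standard mirror equation 1/do + 1/di = 1/f, with f = -R/2 for a convex mirror (negative di means a virtual image behind the surface). the radius value is arbitrary.

```python
# Convex mirror: 1/do + 1/di = 1/f, with f = -R/2 (virtual images get
# negative image distances). As the object distance grows toward
# infinity, the image distance converges to f -- so everything from
# the surface out to infinity is imaged within a finite, shallow band
# behind the mirror, which is why a modest depth of field covers it all.
def image_distance(do, R):
    f = -R / 2.0  # convex mirror focal length (negative by convention)
    return 1.0 / (1.0 / f - 1.0 / do)

R = 10.0  # arbitrary radius of curvature
for do in [1.0, 10.0, 100.0, 1e6]:
    print(do, round(image_distance(do, R), 3))
```

if this sketch is right, the image distances converge to R/2 (the focal length) rather than R, i.e. half the distance to the center of curvature -- which fits the hedged guess above up to a factor of two.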
the things i've learnt during my tutorials with curi and the CF course have been incredibly valuable. they're helpful in ~every aspect of my life - whether that's picking up a new hobby (photography) or providing more business value. here are two small examples about the latter from the past 48 hours:
* i've been helping an old client with some technical architecture/design work as they prepare to hire a firm to implement some software. compared to mid-2020 and prior, i'm much more effective at providing value (advice, planning, design). i'm also much better at communicating my ideas and why certain things are good/bad/wise/risky/etc. part of being more effective is that I'm better able to focus on the decisive issues and tell which issues/topics are the right ones to pursue and when it's right to focus on them. i received a private msg of thanks and appreciation after a recent meeting due to how i handled some things -- particularly involving: avoiding design mistakes, person-to-person communication, what we should focus on around key design elements, and methodology to go forward and not be stuck. my methods for handling those things are heavily based on content in yes/no, the CF course, Goldratt, and other ideas that curi has either come up with or exposed me to. without curi's tutoring and other philosophy products i wouldn't have been able to provide that value.
* after i got my new keyboard, i replied to an email from one of ZSA's customer service people with some early feedback. the original email chain was about a free game that came with the keyboard, but the customer service guy said something like 'let me know what you think of the game and keyboard' at the end. i took some notes while going through their intro docs and sent through half a dozen dot points. none of them were like big game-changing things, but the feedback was geared towards customer experience, quality of their docs/tools, and some nice or frustrating things. again, i used methods i learnt from curi (and Goldratt, particularly) to figure out what was worth giving feedback about, what the feedback should be, and what suggestions i should make along with the feedback (there were a few win-win type synergies in there, like solutions that solve a problem and objectively improve things, even if i was wrong about the problem being a problem). coming up with all those sorts of ideas (from what to focus feedback on, to easy, low-cost win-win improvements) is now easy for me, most of the time. it's easier because curi helped me learn better thinking methods and some really good tools to use. today, the CEO sent me a thank you email for the feedback, and to let me know that he was there and listening for any future feedback (I mentioned i would keep taking notes and send them thru if i remembered). it felt good to be able to provide valuable feedback, with very low overhead to me, and for a company to recognize that and let me know they appreciated it. i like their products and i hope they're successful. i wouldn't be able to do that sort of thing easily and quickly (meaning i probably wouldn't do it at all) without the things i've learnt over the past 6 months.
in the scheme of things, these two examples might not seem like much. but they're representative of much bigger and substantive changes in my thinking and ideas, and they demonstrate value and effectiveness *at will* that i would not have been capable of otherwise. they're also very recent; both in the sense that they're concentrated (over the past 48 hours), because what i've learnt is so general and so useful; and in the sense that the improvement in my effectiveness has been noticeable very quickly, relatively speaking.
I wrote: a crit of UEG's *The Fall of a Society* and a mini essay: YouTube and Twitter are anti 'western values'.
integrating TOC w/ software dev
integrating TOC w/ software dev: I think scrum optimises certain local optima. this comes at the expense of global optima. one example of local optima (that gets optimised) is *velocity*. I think standups might be too, though need to think a bit more about that. I think constraints might be involved in software dev, too, in more ways than are explored in *The Phoenix Project*. also need to think more about that, tho.
I'm really excited at the moment and feel like I'm having a lot of ideas. I want to work on something substantial but it's midnight. I also want to write down my ideas (this being one of them).
I think writing down my ideas is good, and working on something substantial now is probs bad (bc of how it would affect sleep). I think I'm going to try doing one, and avoid doing the other, so that I don't end up coding till 5am. I can code any time (roughly) and I can't finish in 5 hrs. I could do something, but would I be a better coder now for 5 hrs, or tomorrow after sleeping? Probs tomorrow. Writing, though, is good to do now because I actually might forget new ideas.
#19550 i did manage to avoid doing any substantial/long work, but didn't manage to get to sleep before 5am anyway.
microblogging is really good -- or, at least, it's really good for me
microblogging is really good. like i think it's a much better habit to get in to than blogging. at least for me. (most of this is in 2nd person, but it's based on self-reflection)
blogging is hard and it's hard to get started. if you want a high quality blog then it's easy to avoid publishing anything. you can also end up avoiding publishing stuff because you don't think it's good enough. sometimes that inaction is a mistake, and you might have avoided it if you had better thinking methods.
blogging is also a good way to: build up a track record, publish ideas, and get criticism. those are all things that can help in improving one's own thinking methods.
so building a blog can help improve thinking methods, but a lack of good thinking methods can get in the way of building a blog.
(before your thinking methods have improved enough, you can easily spend too much time on topics that are too hard, which can slow your progress, too.)
microblogging is a totally different mindset -- at least that's what i've found. no message is too small, but also, large posts are okay too. it's nice b/c there's fewer ideas getting in your way when you're thinking about making a post. you don't have to get every post perfect; not every idea needs to be fully developed; if you post something that you change your mind about, well, you probably didn't post a big long lengthy thing about it, so the mistake is smaller and doesn't affect things as much. It also exposes your ideas to criticism early, and helps you get ideas clearer in your mind, earlier, too. That way, it's easier to find problems before you spend too much time on things, instead of after.
I've averaged like 2 posts a year on my old blog. when i wrote a post i needed to wait for a moment of inspiration, or more recently i was lazy and only posted videos of me giving a talk in person or online. i've averaged something like 1 post a day in this thread (that'd be like ~120 of the currently 186 posts, excluding this one). there are conversations and things mixed in with ~mini-essays, but I have written (and am writing!) more than I've ever written before. it feels great.
I'm excited for the new community site b/c I want to start a permanent blog there (and redirect my old site to that new blog).
easy/quick way to practice learning and achieving goals
Here's an idea for ppl wanting to practice learning. It's quick, like 60-90 minutes in full. The end goal is to get to learning step 2 (consistency) of juggling 3 balls. getting to step 3 takes a lot longer, but still doable on a weekend sorta thing.
**You could learn to juggle.** It's a good way to practice the 3 steps of learning.
I learnt to juggle like 15+ yrs ago as part of a sports science class. It was demonstrating learning via parts vs learning everything at once. Half the class went in one room and tried to learn to juggle 3 balls, all in one go. The other half (which i was in) learnt to first "juggle" one ball, then two, then three. Success rates were 10% and 90% respectively (~10 ppl in each class). this was an elective subject, so everyone there wanted to be there and had decent hand-eye coordination and all that. by the end of the hour-ish exercise, I could maybe get like 6-20 throws in a row before dropping one of the balls.
The first video I found on learning to juggle was demoing this method. The method looks good and is better than the notes I wrote on the method applied to juggling. She has some good extra bits in a 'troubleshooting' section, too.
The host doesn't mention where she's looking, but it's probably worth noticing that she's always looking at the apex of her throws, not at her hands or down low.
Tensegrity Explained - Steve Mould
I think there's a big problem with furniture - it's heavy and hard to move. IMO there's a decent business opportunity for anyone who can solve this and mb some other problems too. Tensegrity sounds like a reasonable approach to start with. The two other starting points I know of are: modularity + flexible construction (so the same parts could make a chair or a table); and something like structures that could be filled with water to add mass (and drained for moving). There are some major issues with the second one, tho. It's a bit of a hard problem b/c ppl like the sturdiness of furniture (for the most part).
Camping furniture (especially camp chairs - Google it if you're not familiar) is commonly implemented with tensegrity-like structures where fabric supplies the tension and hollow rods supply the compression.
They're light & easy to move as well as reasonably comfortable. But as you mention light & easy to move is also a downside. A good gust of wind can move them, or if you sit down with too much lateral force the chair will move or perhaps tip over.
Water is problematic because the structures to hold it are either bulky or fragile, and when water sits things tend to start growing in it. Dry sand is better about sitting for long periods, but otherwise similarly difficult to hold structurally. Depending on location, it can also be significantly harder to get than water. And in the case of both water and sand, getting it in and out in the context of an indoor setting is difficult and probably messy.
Outdoors, you can use ground stakes to substantially improve stability of camp furniture. Most people don't bother with that because they won't be leaving it in the same place for very long. The indoor equivalent to stakes would be bolting to the floor. That would work great for stability, but just trades difficulty moving the furniture itself for difficulty (and damage) moving the bolt holes.
A less permanent stake-like option that would also be suitable indoors would be something like weightlifting dumbbells attached to the leg bottoms. 4 x 10 kilogram dumbbells at the bottom would add 40 kilograms of stability with extremely low center of gravity so would be super hard to tip over. Yet, when detached you're only moving things that are 10 kilos and relatively small - not especially difficult unless you're frail. But you wouldn't want to use standard shaped dumbbells because people would stub their toes on them. You'd want something tapered from the top to a rounded off edge at the bottom. Or some kind of soft bumper along the edge.
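A back-of-envelope sketch of how much base weights help against tipping. The chair mass, base width, and push height here are made-up assumptions for illustration; the model is the simplest static one (tip when the push torque about a base edge exceeds the restoring gravity torque, center of mass assumed over the middle of the base).

```python
G = 9.81  # m/s^2

def tipping_force(total_mass_kg, base_half_width_m, push_height_m):
    """Lateral force needed to start tipping a structure about a base
    edge, assuming the center of mass sits over the middle of the base.
    Tip threshold: F * push_height > mass * G * base_half_width."""
    return total_mass_kg * G * base_half_width_m / push_height_m

light = tipping_force(5.0, 0.25, 0.45)      # ~5 kg camp chair alone
weighted = tipping_force(45.0, 0.25, 0.45)  # plus 4 x 10 kg at the legs
print(round(light, 1), round(weighted, 1))  # 9x the force needed to tip
```

The ratio is just the mass ratio (9x here), which is the point: cheap detachable mass at the bottom buys a lot of static stability.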
> A less permanent stake-like option that would also be suitable indoors would be something like weightlifting dumbbells attached to the leg bottoms.
Yeah, I was thinking that pairing something like that with a tensegrity table/etc might be a good option. You could make weights cheaply and have standard mounts the weights screw in to (or connect however). Ppl could adjust their preferred mass that way. And it'd be cheap to produce plastic ones to fill in the gaps. Similar to how some mice have weights you can add/remove.
There's one other issue I can think of WRT tensegrity furniture (atm at least): deconstruction for transport. They don't look like the easiest structures to (un)assemble. Mb with enough slack on the ropes that sort of solves itself, though?
> There's one other issue I can think of WRT tensegrity furniture (atm at least): deconstruction for transport. They don't look like the easiest structures to (un)assemble. Mb with enough slack on the ropes that sort of solves itself, though?
Camp furniture solves this by providing hinge points in the compression elements, usually with some sort of locking mechanism. When the hinge is unlocked and folded, the compression element(s) no longer provide compression. Without offsetting compression, the tension elements have slack and the structure substantially collapses.
I sometimes think about InternetRules or GISTE (others, too) when they haven't posted for a while. I hope everything is good and they come back.
(I haven't been checking FIGG regularly, so might have missed something there, but i think it's been quiet for a while. mb discord, too, bc there are some channels I don't read in full)
#19604 Hmm, I'm not sure I understand which camping furniture you're talking about. Here are some examples of what i was first thinking of:
okay, thinking through what you said again, mb I do get what you mean. Was a bit hard to think about without looking at the structure again.
Are they *full* tensegrity? Mb -- like, definitely if you compromise the fabric they can't bear weight. so I guess it's a bit unintuitive because, unlike the table example in the vid, you're not putting weight on a rigid part; there are no rigid surfaces. Mb the intersection points are not ~proper tensegrity either, but you could easily replace them with e.g. small loops of rope or something that could hold them via tension (and coat the rope loops in rubber to keep them in place).
Who really cares about 'full' tensegrity tho, I just remembered the relevant part at the end of the video, where it's explained that the demo table isn't even 'full' tensegrity bc some of the aluminium is under tension.
praise, appreciation, and benedictions
this isn't about FI specifically, but I talk a bit about us in the first half.
praise is a weird thing in FI. I agree with curi that people ~withhold it here when they wouldn't otherwise. There are good and bad things about that, in large part because there's good and bad praise. Praise is somewhat related (at least commonly) to something like endorsement; mb something a little gentler than endorsement. If you're being cautious, it's easy to think that -- bc of that relation -- praise can be like a bit dangerous, like if you praised something that had mistakes. IDK the solution, but I think that's accurate so far.
Appreciation is sort of like a superset of praise (tho mb there are some things that are praiseworthy but not good to appreciate). I appreciate a lot about FI and discussions here, but not every part of those discussions is *praiseworthy* even tho it is *appreciated*. I think -- if you wanted to take action to make FI a ~friendlier place -- appreciation is a safer/easier way to do it than praise (tho that shouldn't mean we like exclude praise or whatever).
Benedictions are a thing I only learnt about recently. The word refers to like the nice stuff a priest will say at the end of a church service. 'may you be blessed in all you do' or something. more generally I think it's a good description for lots of the stuff we say, esp to lots of ppl at once, like 'hope you all have a good break' between classes.
In general, I think of benedictions as like less than praise and appreciation, but also more general b/c of that. *Benedictions have reach.*
*What is praiseworthy? What is appreciation-worthy?* These are questions that are related to morality. Good answers to those questions have a lot of morality baked in, but different moral knowledge is baked in to each. I think the same applies to benedictions.
I think benedictions are so general that good answers to questions like 'What benedictions are worth making?' will have some very deep and fundamental moral truths baked in. The benedictions with the most reach are *things you can wish people that apply to all people and all situations, all the time*. Put in that context it almost seems obvious that deep/far-reaching moral knowledge needs to be baked in.
So good benedictions are ~special in a way that other well-wishings aren't. It seems significant.
On New Year's (very shortly after learning about benedictions) I had an idea for a video that was basically me saying benedictions the whole time, interspersed with some of the philosophy/ideas above (IDK what I would have done with editing/b-roll or whatever). The title would have been something like 'benedictions for 2021; a video for everyone'. I wrote 3000 words of benedictions as a sort of draft-script while I was playing with the idea. There are some longer paragraphs that were making more of a point than just the benediction (about like society or democracy or living a good life), but there are like 100+ things on the list. Stuff like: (paraphrasing from memory) 'i hope you learn new things' and 'i hope you figure out the places that you get stuck and ways to avoid getting stuck on those things'. there was also stuff like 'if you die, i hope you get to do so on your own terms' and 'i hope you don't get sick'. there was more stuff like access to running water, living in a democracy, don't accidentally contribute to the destruction of your society, learn humility, able to reliably take care of your family (like for parents). If anyone is interested I think I'm okay posting it.
I think making FI microblogs more accessible can significantly help ppl stay with FI. Like it can help solve some major reasons that ppl leave / give up / get stuck. I think FIGG lacks *accessible persistence in combination with freedom*. I think the new CF forum is a significant opportunity in this regard.
Maybe I don't understand tensegrity precisely. I was thinking a tensegrity structure is one that bears weight through a combination of compression and tension, rather than rigidity alone.
Regardless, I think the structure in the video could be made to set up and collapse as easy as a camp chair.
It might help to imagine what would happen if the silver "arms" on the inside top and inside bottom of the table in the video had lockable horizontal hinges in the middle and at the table (red disc) attachment points.
So when the hinges are locked, no difference from the video behavior. But when you unlock the hinges, now the arms can fold over on themselves vertically and lie flat against the inside table faces.
When that starts to happen, the tension goes out of the wires, and the table collapses into two discs with the arms and jumbled wires in the middle. If you twist one of the discs while the structure is collapsing it wouldn't even be a jumble - the wires would be approximately straight along the edges.
To reverse the collapsing process you pull the discs apart and re-lock the hinges.
#19621 you can get millions of Likes on TikTok by saying stuff like "you got this; you're gonna make it; don't give up". different phrasing, and some visuals, but verbal content along those lines.
tangent: i think you could get even more Likes if you said "i love you" (especially for unmarried women) in a socially calibrated, non-creepy way. if ppl can think "things will turn out well" applies to them, and personalize it when hearing it, why not "I love you" too? if they can feel stuff from "you're a strong person who will succeed" from a recorded video, why not from a more emotionally charged and desired statement? and getting a loser 18yo boyfriend to say "i love you" enough (and dealing with him in general) is a big hassle. maybe there's a popularity-market opportunity here.
#19637 Telling people they're beautiful, in a pre-recorded video, having never seen them (and probably never will) is also popular.
#19637 After posting this, I was sent a TikTok that says "I love you" (~romantically), you're beautiful, your ex boyfriend was wrong, etc. It has 120k likes:
#19640 The hashtags on that one are notable re target audience:
> Who has been there?🙋🏻♂️ #relationshipgoals #ex #exes #singlemom #youarebeautifiul #over30 #over30club #over40 #over40club #divorced #queen
There are also 3500 comments. tiktok seems to be displaying comments as images to prevent copy/paste or something. the first one (for me atm) says it's not always an ex, it can be someone you're currently with too. the second one complained this video didn't fully solve their emotional abuse problem. the third said their husband doesn't show love enough.
#19641 There's a lot of things I'm finding surprising about TikTok; did not think that age bracket would be there or popular.
FYI I don't think they show comments if you're not logged in.
> you can get millions of Likes on TikTok by saying stuff like "you got this; you're gonna make it; don't give up". different phrasing, and some visuals, but verbal content along those lines.
It seems like there's a lot of common ~benedictions ppl use, but common ones like those -- I don't think they're good benedictions. There are lots of lies ppl expect from social situations (like 'you got this' -- you might not). I think there are ways to be nice to ppl where you don't have to lie, and you basically get the same thing across. Some topics are harder tho, where like a more full on idea is needed.
comparing work to speedrunning
this is an exploratory post.
is there merit in comparing work to speedrunning? I'm considering this particularly WRT programming, but I think it applies more generally, too.
I guess the intuitive/typical reaction is that speedrunning=rushing and rushing=>mistakes. That seems like a good prediction of what most ppl would *do* if they tried to speedrun their work. I don't think it's a good criticism of the comparison, tho, b/c it ignores what *the best ppl* would do.
What does *the best ppl* mean? Let's answer that for speedrunning before work.
The best *speedrunners* are able to: *learn a new game quickly* and *practice effectively* so that they *are competitive quickly*. The best speedrunners don't *rush the learning*, but they do *learn effectively*. What's the difference?
The difference between *rushing learning* and *learning effectively* is that *rushing is slower and error prone*; rushing ends up *increasing your error rate* rather than *decreasing it*. A *low error rate is required for effective learning*.
The structural difference between *rushed* and *effective* learning is the same difference between seeking *local optima* and *global optima*. Seeking local optima will sacrifice long term effectiveness for a short term goal, which is what *rushing* does. Seeking global optima sacrifices short term goals (which are ultimately insubstantial, anyway) so that you are effective over the *lifetime* of meeting your goal.
For a speedrunner, the path between *novice* and *expert* can be *long* or *short*. A longer path means that learning was less effective, and a shorter path means it was more effective.
**What is the shortest distance between *novice* and *expert*?** The shortest path is the path of global optima; the path of effective learning.
The difference between *good* and *bad* speedrunners is *the difference in their shortest paths*. A good speedrunner, having practised learning effectively, will be able to approach global optima more quickly than a bad speedrunner, even if the bad speedrunner isn't rushing.
A confronting and difficult reality is that *no individual learning experience is enough for a bad speedrunner to become a good speedrunner*. That is: *effectiveness takes many attempts*, but the *result* in each case is largely outside the control/scope of the relevant event/process.
How does this apply to work?
Being *good* at *speedrunning work* means that you can power-up effectively in new situations. Being *smart* or *gifted* means you did that once for one thing.
(other things to do now, hopefully that's enough to see where I was going)
#19773 I think this idea could be used to make a good job interview. My first guess at how is something like:
* You get a linux terminal, sort of set up like a CTF or similar.
* The challenge is incrementally revealed to you as you progress; starting with like a welcome type prompt.
* I think the goal that the applicant is presented with should be given fairly upfront, like not indirectly. You want to see how well the person responds to clear goals, not indirect / subtle type goals (so don't obfuscate it).
* The goal could be changed/modified part-way through, mb in response to what the applicant does (but that sounds complex), or just through the applicant finding out more info. That will probably happen naturally tho, so probably no need to go out of your way to make it happen. (also you don't want to interfere too much with the applicant)
* the purpose is to see how they adapt to things that are somewhat unfamiliar; not too much tho. like the linux env should be std.
* you can gauge stuff like:
** can they configure an environment quickly enough to get like zsh + oh my zsh + nvm + pyenv + etc set up => this means they can be more productive in the remaining time.
** can they navigate unfamiliar tooling or find appropriate tooling quickly?
** can they debug a programming language they haven't used before? what techniques do they use?
** how many rabbit holes do they go down? what's their self-judgement like?
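The "incrementally revealed" mechanic could be sketched like this (a minimal toy, not a real CTF framework; the stage artifacts and prompts are invented for illustration):

```python
# Toy sketch of an incrementally-revealed interview challenge.
# Each stage's prompt is shown only once the previous stage's
# artifact exists. Stage files and prompts are hypothetical.
import os

# (artifact the applicant must produce, prompt revealed at that stage)
STAGES = [
    ("hello.txt",   "Welcome! Create hello.txt to begin."),
    ("parsed.json", "Stage 2: parse data.csv into parsed.json."),
    ("report.md",   "Stage 3: summarise your findings in report.md."),
]

def current_prompt(workdir="."):
    """Return the prompt for the first stage whose artifact is missing."""
    for artifact, prompt in STAGES:
        if not os.path.exists(os.path.join(workdir, artifact)):
            return prompt
    return "All stages complete."
```

A real version might run as the login shell's MOTD or a watcher process, and could log timestamps so the interviewer can see how long each stage took and where the rabbit holes were.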
most scientists rush
I think one of the effects of mainstream scientists' errors in thinking is ~rushing (in the sense I use it in #19773). Scientists end up rushing to try and ~confirm theories. They don't understand the role of conjecture and criticism.
How this works:
1. scientists think that *~checkpoints* exist when papers get published.
2. scientists are selective about projects b/c they want projects that can be published; they don't want to publish self-criticisms/postmortems -- b/c they (and, more importantly, academic elites) see them as failures.
e.g. they don't want to publish a paper about an idea that they came up with, an experiment they did, and a refutation they found. it's like they think that, because they did all 3 of those things, the idea/postmortem/etc isn't worth publishing. if they only did 2 of them (e.g. idea+experiment, or experiment+refutation) then I guess they'd think it was worth publishing. (doing all 3 is just a special case of idea+refutation, which they also don't think worthy of publishing.)
(1) + (2) *seems* to work for making progress b/c other ppl are involved; like the collective work of lots of ppl doing (1) and (2) *happenstantially* results in progress. Some collection of people could do (1)+(2) *indefinitely* without making progress. The reason scientists, particularly, find some success with (1)+(2) is that they have other ideas that contribute to scientific progress.
Scientists often try to find local optima in (1) and/or (2). The result is that they sacrifice global optima. That's bad and slows progress.
context for writing 'most scientists rush' and some stuff about dark matter and gravity
Some context for writing #19784: (the above isn't a direct response to the context, just an idea I had while...)
I watched New evidence AGAINST dark matter?! by Dr. Becky because I wanted to see if there was some new criticism and I was interested in maybe doing some philosophy about the video (depending on content). I expected to find major errors. Here is an example of one IMO (at 8:20 - 8:37)
> Now having said that, that does not mean that, you know, we are going to just tear up the theory of dark matter and throw it out the window. Because this is just one piece of evidence; it does not outweigh the giant pile of decades worth of research and evidence we already have in favour of dark matter.
Her errors in thinking are serious and those errors prevent her from doing better science and being a better science communicator. :(
(she says some okay stuff immediately after that quote about reproducibility and using more data; tho the paper in question was 4-sigma so IDK if more data is that important unless there's some idea about what/why an inconsistency would be found; N=153, but these 153 galaxies being special would be like more 'worrisome'/interesting IMO.)
The effect that the titular paper claimed to discover was a correlation between the rotational speed of galaxies and the 'local' galactic density. Like: tightly packed galaxies rotated slower (or faster, can't remember) than galaxies with no 'nearby' galaxies. This is a correlation predicted by MOND and the External Field Effect. Mainstream dark matter theory predicts the absence of such a correlation. N=153, 4-sigma. IDK much more than that.
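For a sense of scale on that 4-sigma figure: under a standard normal null, 4 sigma corresponds to a two-sided p-value of roughly 6×10⁻⁵, i.e. about 1 in 16,000. A quick stdlib check (my own illustration, not a calculation from the paper):

```python
import math

def two_sided_p(z):
    """Two-sided p-value of a z-sigma result under a standard normal null."""
    return math.erfc(z / math.sqrt(2))

print(f"4-sigma: p = {two_sided_p(4.0):.2e}")  # about 6.3e-05
```

That's small enough that "just collect more data" only helps if there's some specific conjecture about what systematic error or inconsistency the extra data would reveal.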
I wrote SMB1 Local Optima, the checkpoint-goal method, premature optimization, a conjecture about when optimizations are worth doing, and potential risks in *Alisa Discussion*
I posted this msg to the YT channel that inspired my macro photography idea
My goal with sharing this letter is to expose it to criticism.
I think I have a new way to do ~macro photography with ~large aperture + deep DoF. I don't know if there are practical considerations that I'm ignorant to, but my test seemed to show some promise. Also I have only been doing photography stuff since early Dec, so I expect this method is harder than I can guess ATM.
The method: Do not photograph the target; photograph a reflection of the target on the outside of a spherical mirror. You'd then need to un-distort the image in post. I don't know if it's possible to reverse the distortion using only lenses; I suspect it's impossible without additional mirrors.
Convex mirrors create a virtual image between their focal point (half the radius of curvature) and their surface. That virtual image is a "compressed" projection of everything between the surface of the mirror and infinity. That means your depth of field only needs to cover the distance between the surface of the sphere and the focal point, which is far less than the distance to infinity.
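A minimal numeric sketch of why the depth-of-field requirement shrinks (my own illustration: I assume a 5 cm radius sphere and use the standard thin-mirror equation, with the sign convention where a convex mirror's focal length is negative and virtual images have negative image distance):

```python
# Thin-mirror equation for a convex spherical mirror:
#   1/v + 1/u = 1/f, with f = -R/2 for a convex mirror.
# R = 0.05 m is an assumed sphere radius for illustration.

R = 0.05      # sphere radius in metres (assumption)
f = -R / 2    # convex mirror focal length (negative by convention)

def image_distance(u):
    """Solve 1/v + 1/u = 1/f for v, giving v = u*f / (u - f)."""
    return u * f / (u - f)

# Objects from 10 cm away out to effectively infinity:
for u in (0.1, 1.0, 10.0, 1e6):
    v = image_distance(u)
    print(f"object at {u:>9.1f} m -> virtual image {abs(v) * 100:.3f} cm behind surface")
```

Every virtual image lands within R/2 = 2.5 cm of the sphere's surface, so the camera's depth of field only needs to cover that thin shell rather than the full scene depth.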
I used the back of a spoon for tests, and found F4 with the Sigma 16mm DC DN was able to get everything from the fingerprint on the spoon to the far wall (several meters away) in focus. I've since acquired a variety of stainless steel spheres ranging from 2cm to 15cm in diameter, though haven't done more tests yet.
(I think it was the Sigma 16mm, from memory -- can check exif but probably not important.)
Note: I realise that this might not technically meet the definition of "macro photography", but I think it can still work for photographing small subjects with deep DoF.
It took me about 24 hours to come up with the idea, and 3 hrs of researching how mirrors worked (optics/physics).
The inspiration for the idea was thinking about "why are reflections in ppl's eyes in focus?"
I am a philosopher, though my thinking methods have improved enough only very recently, so I am a new philosopher. I posted about my discovery on Jan 5th here: curi.us/2380-max-microblogging#19371 (there is also some stuff about replicant retina reflections; I've also made some discoveries there, I think).
I had intended to make a video demonstrating this new method but I do not have enough spare time currently, which is why I'm telling you directly here.
I've very much enjoyed your videos and I hope you make more.
I'm willing to answer any further questions you have regarding the above.
All the best,
For context Nikolas asks -- in the video I responded to -- if anyone knows how to achieve something similar to what I think my method might be able to achieve.
I posted to the YT "members" section for Media Division (the channel), so the comment isn't public AFAIK.
Some exploratory writing made as I watched "These Pools Help Support Half The People On Earth", Veritasium's most recent video. (Spoilers)
**Spoilers** are below re: the Q in the video about what the mysterious colorful pools are for. You can watch up to 1:50 of the video without spoilers.
I originally considered naming this post "Veritasium, a channel I enjoy less and less as Derek has gotten better at doing his business."
Timestamps for my pausing and notes are below.
Derek asks Q; paused to guess.
Stepped nature of pools (like rice paddies) might not be significant except that it separates them.
Colors -> algae? rare earth metals or dissolved salts?
I know rare earth metals are mined in the US a bit, and that China's rare earth mines produce a lot of toxic byproducts (it would make sense to put them in a desert away from ppl for treatment).
How could these pools support half the ppl on earth in some way? ~4b ppl. Tech makes sense. Semiconductors <- rare earth metals.
Environmental effect? Unlikely b/c they're artificial pools. Also the difference in colors implies there is some significant reason that they're separated but close together. Makes sense re: treatment of waste or purification of the metals.
Not sure if anything else is coming to mind. Local effects don't make sense. I guess it involves large scale economics and specialization/trade.
potash -> Davy discovered a new metal element -> Pot-ash-ium
Ahh, so I was on the right track w/ dissolved salts and purification?
Hmm, maybe I was wrong about not enjoying Veritasium; tho I have some issues with the video still, including the intro and Derek's "I did not expect that" reaction and how it was shown at 4:20 - 4:40. I am not particularly interested in what I predict to come after 6min, tho.
Yup, I was right that I wasn't interested in the stuff that followed. I appreciate that Derek gave us the answer at the beginning of the video; like, it wasn't really clickbaity.
a bad video: ARI's "An Objectivist Perspective on Love"
I posted to discord:
> [10:39 AM] Max: I think it's notable how trash the second result is. (And it's from ARI, no less.) It has ?v=l2tYLKzF02Q if you care to watch it. Transcript: https://pastebin.com/1gH534eS
> [11:41 AM] JustinCEO: you should say something about why specifically you think it is trash @Max
> [11:41 AM] JustinCEO: even if other people agree, they might have different reasons/see different issues
Before I get into my reasons, I want to point out the contrast between the 1st and 2nd result in the screenshot.
The first result is "Ayn Rand on Love and Happiness | Blank on Blank". It is a slightly edited 5-min excerpt from Rand's 1959 interview with Mike Wallace and has a stylised animation. The animation seems okay to me. I think the excerpt from Rand's 25-30min interview is well chosen and reasonably complete, tho I last listened to the source about a month ago, so might be mistaken.
I think Ayn Rand's response to Mike Wallace is excellent. I think Mike Wallace was not a particularly good interviewer and was combative. IDK if there's significant context about the 50's that I'm missing, tho.
The second result is "An Objectivist Perspective on Love", an 80sec excerpt from How to Fill Your Life with Meaning (Dave Rubin Interview with Yaron Brook and Gena Gorlin). I have not watched the source material, tho scrubbed through to find that the clip is sourced starting around 41min 25sec.
I think the title "An Objectivist Perspective on Love" is dishonest and misleading because it is not an objectivist perspective on love. Gena's answer starts in the wrong place (art?) and does not capture even a tiny fraction of the depth and nuance in "Ayn Rand on Love and Happiness". Gena's answer roughly goes: art is a symbolic embodiment of possibility -> "love is the actual experience of being in such a world and having such people populate your world" -> falling in love with a person is like the "two-sided equivalent" of falling in love with an artwork -> the other person embodies ideas in their person e.g. the ways they laugh and move, the career they pursue, they're the "external concrete embodiment of what makes life worth living for you" and they reciprocate -> the "dual experience" is a "core component of a life ..." (... mumbles, mb 'worth living' or 'long relationship', IDK).
I think Gena doesn't understand Rand's views and misrepresents objectivism, though there might be missing context that means the excerpt is not representative of Gena's understanding. Some of the stuff she says isn't bad, but some is, and I think it's, at best, a slightly-better-than-mainstream response.
It is possible that ARI's choice of how to frame the clip is doing a disservice to Gena, in which case I put the major blame on their content team, rather than Gena. I am ready to reserve judgement re: Gena if other FI ppl think she's okay.
I think Gena's ideas and response conflict with Rand's ideas in these ways:
* I think it conflicts with Rand's idea that we should love selfishly
* and that it conflicts with Rand's idea that we should love people for their virtues
I think the topic is a very broad and important one wrt morality, and it's a topic that should be dealt with: either in sufficient depth (for most ppl), or concisely with sufficient clarity if one is able to express the ideas with comparable clarity to Rand (if one is able to *do justice* to her ideas). Because ARI is dishonestly promoting a video that claims to offer insight on this topic, but offers none, ARI is making a major philosophical and objectivist error. The way ARI deals with this topic is of both insufficient depth and insufficient clarity.
It's also a dangerous topic (curi's comments re: Cupid flinging arrows comes to mind, ancient Greeks recognised the dangers of mishandling romance). That only exacerbates the issues above, especially because Rand's views are *good* and *moral* and they can *help people*. ARI's video helps no one.
#19804 When I responded to Justin in Discord I added this, which I think is substantial enough to post here:
> I also might not have emphasized this fact enough:
> These are the top results when you search (unquoted) for the name of *the greatest moral philosopher* and *a near-universal and enduring part of human life; the mismanagement of which has caused more pain and suffering in peace-time than anything else* (at least AFAIK).
> I think Gena's ideas and response conflict with Rand's ideas in these ways:
> * I think it conflicts with Rand's idea that we should love selfishly
> * and that it conflicts with Rand's idea that we should love people for their virtues
I agree with these two.
Another difference between the two clips is that Gorlin says love is two-directional and Rand doesn’t. I don’t see why love has to be two-directional. You could love someone’s virtues without them loving your virtues. You could love someone for virtues that they have and you don’t have but wish you had.
Makeup vlog 1: https://youtu.be/HcpoQIJWVio
crits welcome ofc.
Opinions on fair use / copyright stuff welcome too. The vid was immediately identified under content ID which means there will probably be ads on it. I don't really care, tho. I think ppl should show ads on their videos if they can bc otherwise google has an incentive to de-rank those videos. (no adrev)
#19826 your makeup isn't very current. watch some makeup tiktok.
#19827 Do you have any suggestions? Can I search tiktok, like type in "makeup" and get meaningful results? I've never downloaded the app or used it outside of being linked 2 or 3 times.
This was the first attempt and I have a small variety of stuff to work with - like less than I will have tomorrow.
thx for the pointer.
I like the idea of you doing makeup for a while to see what it's like.
I'm confused by the video in the middle. Why is it there and what does it have to do with the topic?
> I'm confused by the video in the middle. Why is it there and what does it have to do with the topic?
You mean the scene from *Don Jon*?
It's an example of a straight woman that I think would not appreciate her male partner doing this sort of thing. I included the full scene because: I thought it's a good scene for demonstrating particular social dynamics; I didn't want to compromise the artistic integrity by showing less than the full scene; and there were more relevant aspects in the scene than I anticipated. I remembered the 'it's not sexy' line WRT vacuuming and that's why I wanted that scene.
When I cut the recording of me I realised that I didn't leave much of a gap between the sentences before and after the scene, and b/c of that I think the edit is a little jarring and there isn't enough context after the scene ends and I cut back to me. I didn't think it was a big issue tho -- maybe an error on my part but I didn't think it was a constraint.
The vlog videos are not meant to be as high production quality as like fully public videos I make. That's why they're unlisted. I think I can make a playlist of unlisted videos to make them a bit more accessible, tho. Right now they're only public b/c I post the link here (and share it privately with a few friends as well who are interested in philosophy to some extent).
I agree with you mostly, btw. I was thinking of replying directly with my thoughts but didn't think I had much to add. The one caveat is this part:
> You could love someone for virtues that they have and you don’t have but wish you had.
I'm not sure it's good to wish stuff like this, but I think I see what you mean. I think you can love someone for virtues they have and you don't have without wanting those virtues for yourself. e.g. I guess that Ayn Rand loved Frank O'Connor for virtues around his art that she didn't necessarily want for herself. But maybe that's getting overly specific, and there's a more general underlying virtue like 'integrity' that (I guess) she thought they both shared.
makeup vlog 2
makeup vlog 2: https://youtu.be/p67Cr0idCUE
still uploading but should be available in 5-15 min.
makeup vlog 3
I changed my mind on daily videos - I have a list of topics that I can talk a bit about and will do a daily vid as long as I have topics on that list. Today was *cost*.
makeup vlog 4 - negative emotions
I did today's makeup vlog about negative emotions. there's a bit of basic discussion about makeup + negative emotions but mostly I just read aloud fallibleideas.com/emotions.
so it's a bit different. it's also heavily edited b/c the raw video is like 2.5x as long and is mostly ambient sounds of makeup and stuff. i thought that was too long and too boring. I cut out a little bit of what I said (in the uncut recording) but I didn't think any of the stuff cut was substantive.
at ~21:33 in my makeup vlog thing today ( https://youtu.be/Bga0A_1SG0E?t=1279 ) I say (after reading fallibleideas.com/emotions)
> Elliot was better at this stuff 10 or 11 years ago than I am today.
I think there's some bad ideas I have behind that statement. I'm comparing Elliot to me, which I shouldn't do (cargo culting?), and I'm comparing progress/learning/ideas in one specific area with some implication that this is like meaningful. Like it's an important and significant thing that I'm not as good at that.
Also, I compare like elliot-years-ago to max-years (or something) which is a bad way to look at it too. a better way is that I've been doing this for like 6 months and change, and at that stage Elliot had been doing it for ~10 years already. If I put it like that it sounds fine, but the way I put it in the video sounds bad.
You asked for ideas for topics for your makeup vlog. I’m interested in what social pressure about makeup looks like. Where/when/how do women and men get socialized to think it’s important that women wear makeup?
In your fourth makeup vlog, you talk about feeling “a bit shit” and you seem unhappy, especially towards the end. You talk about Elliot’s Fallible Ideas essay about emotions. But you don’t relate the two (your negative emotions and the essay). That’s not really in the scope of a vlog about makeup, but maybe it’s something you want to do on your own or write about elsewhere.
makeup vlog 5 | it's not easy being green (or orange, or purple, or w/e) | color and mean comments
I'm rendering today's makeup vlog now. It's 11:30pm for me and it took considerably more time to do today's vlog. I increased the production quality with clips, annotations, edited sequences, and some music at the end (~7 minutes of me doing makeup and some final comments at the end).
I am making today's video public (rather than unlisted).
#19860 Thx. I think I will do a high production quality video on this. A bit of today's video has some relevant stuff (a Simpsons clip with some short analysis).
#19862 I give some brief comments on this, too. I think it's good for me to clarify if there's any ambiguity and that I should do that in the same venue the source material comes from (i.e. these videos).
I made some self-criticism on the FI discord (and mirrored this in the YT comments):
in my makeup vlog today (#5; https://youtu.be/oSeDWKjSX8c?t=242) I mention that Trump was mistreated b/c he looked a bit orange.
at 4:22 I say:
> ... **and, you know, it's not what I would choose to look like, but at the same time** I think that mean-spiritedness is something is... it's very hard -- and we shouldn't want to do this anyway -- but it's very hard to isolate that to like an individual person.
I think I was wrong to say that and actually promoted some of the things I was criticising. I, like, had to mention that I thought it ~distasteful before I said the attacks were wrong. I didn't need to say that I didn't want to look like that; I added it in there for social reasons, and that was wrong.
In my first makeup vlog at 9:30 I mention razor burn and the motion I make at the time indicates shaving against the grain -- that's a bad thing to do. I learned that shaving against the grain is a thing that (mb) causes razor burn (I linked the relevant vid in ep 2). Since watching the shaving vid I haven't shaved against the grain, and so haven't used the motion I made in vlog 1.
makeup vlog 6 | visiting Mecca (a makeup store)
makeup vlog 6 is live.
@Anon #19827 - what do you think of my recent makeup?
You're the only person who's given me some criticism on the "look" I do in one of these vlogs; I'd appreciate more crits/comments if you're willing. thx.
In today's makeup vlog at 6:08 I say "the drug store" in a different (and deeper) voice than the surrounding sentence. I think I'm implying something negative about the idea of calling a pharmacy/chemist a "drug store". It's not a common term in Australia (like, no one ever uses the term, tho ppl know what it means), and I think my change of voice implies something negative, either: about the idea of calling it a "drug store"; or about ppl who shop at a drug store for makeup e.g. poor ppl, whores (referenced in vlog 5), lazy/careless ppl, etc; or mb just like an anti-american attitude in general (I hope not this one).
Anyway, I think changing my voice like that was bad and reflects something (which I don't fully understand atm) that is not a good thing. Like some bad or wrongly-judgmental attitude.
Ideas and crits (i.e. help) welcome
#19880 my sister gave me some feedback today (only quoting a bit b/c I didn't like specifically ask)
> Wow!! You actually look so great Max!! [...]
That was after I made #19880 but it's relevant so thought I'd mention.
makeup vlog 7 | feeling 'naked'; makeup and self-consciousness
I corrected something *twice* re: Mecca and budget/sales
In yesterday's vlog at 4:55ish I talk a bit about how much I spent at Mecca (the makeup store) and my budget.
The first correction is via text in the video (made in the edit) where I give a more precise description of what I told the sales rep about my budget.
The second correction is in the YT comments:
> The total I spent at Mecca today was $669, so with the cleaning products mentioned at 6:40, plus the brush-cleaning-silicone-utensil ($15), I came in under $600, so I think that Mecca shouldn't be judged poorly for budget stuff; like I'm complicit in spending more than what I initially planned to, so one can't make a judgement about like makeup stores trying to fleece ppl or anything like that. AFAIK they were respectful of my budget (as much as any sales rep can be expected to be). One thing I did not like was that, at the POS, they didn't have a customer-facing screen so I couldn't keep a track of products as they were scanned. I had to wait for them all to be scanned to hear the final price.
> Also, I hope they have some sort of commission system b/c the woman who helped me was really nice and I appreciated her good customer service.
I think it's notable that I felt compelled to do **two** corrections there. I make occasional corrections on minor stuff like this, but I can't remember the last time I needed to make 2 corrections on the same topic.
In ep 2 of my vlog I talk about saying a line ("have you ever wondered what it's like to wear makeup") each entry so that I can cut it together for a good intro in a post-project video. I've forgotten to say the line in my recent entries.
However, if I just use decent editing techniques (like introducing a voiceover before content) then I can just cut together whichever clips I like from each entry without worrying about saying the line each time. The effect (a good intro for the viewer) still happens. I think it was a middle-level error to emphasise saying the line in my plans as much as I did. By "middle error" I mean that it could have been a major error if I focused on it more than I did, and it might have been a constraint at some point (goal-dependent), but as it turned out it was a minor error (doesn't affect much).
I have some thoughts on major and minor errors and how you can tell the difference. If anyone is interested in those things particularly, LMK and I'll focus on writing about that sooner than I might otherwise.
Some crits of HeroLFG's post "Casualties in the Information War"
The introduction has two sentences:
> People need to understand and agree with the goal of a system.
What's the context here? All systems that ppl use? All systems? Designed systems? etc
In any case; I don't agree. I think ppl and systems can have different goals; what matters is the overlap. Provided that the actors' goals and the system's goals agree *enough* then things can work fine. Some goals are better than others; some goals have *reach*.
Take a traffic/road system. Let's ask some basic questions:
- What's the system's goal?
- What are the goals of the commuters?
- Do commuters need to understand things about traffic systems to use roads effectively?
I don't think ppl need to fully understand and agree with the goal of a traffic system to use it. They just need to understand *enough*: that they can use it to get from A to B. Provided the commuters have a goal like that (getting to work, the shops, home, etc) then it works out. The system might have other goals like: having multiple routes between points, having efficient routes so that it doesn't take too long to travel places, making sure there are enough lanes on the road to accommodate traffic merging and splitting, etc. Ppl don't need to know anything about these other goals to use roads.
> If people ignore the goal of a system, then the system is assuredly being wasteful and counterproductive.
I think this is problematic too, like these are the qs that come to mind:
Assured by what? By the people ignoring the goal of the system? What if the ppl using the system ignore the system's goals but have their own goals? Can the *"misuse"* of a system be efficient or must it always be inefficient?
Similar to above, I think exploring these qs will both provide examples of system use and misuse that isn't counterproductive (it doesn't need to be perfectly optimal) and show that ppl don't need to know/consider the goals of a system to use it. They only need to know enough to know how they can use it to achieve their goals.
so far, at the end of the intro, I think I have some major disagreements with HeroLFG and I don't know much about what the blog post is meant to be about; like the title is "Casualties in the Information War" but the intro is about ppl knowing about the goals of a system. I also don't have a sense for the audience or purpose of the post.
because the purpose, audience, and topic of the post is unclear I think it's reasonable for ppl to stop reading b/c the lack of clarity is the first major error in the post.
makeup vlog 8 | YT makeup tutorials and the best makeup tutorial channel
Try doing a more modern makeup look?
Are you using being male + wearing makeup as an excuse? The makeup you're doing is unrealistic for a woman to wear in public today. And there are men who wear makeup but you don't look similar to them either.
> Try doing a more modern makeup look?
How do I know if a makeup look is modern or not? I did some research this morning but it's not that much clearer to me. I found some mistakes I was making too, but IDK how they relate to modernity. (like my eyebrows are the wrong shape, which I could have avoided if I'd plucked them differently)
> Are you using being male + wearing makeup as an excuse? The makeup you're doing is unrealistic for a woman to wear in public today.
Excuse for what? Doing an outdated look or doing an unrealistic makeup routine or something else?
IDK what you mean by unrealistic, also. I see women down the street with full coverage + lipstick + eyeshadow. Is there something else that's unrealistic about what I've been doing?
I am planning on doing a lighter makeup routine soon, tho IDK if you would think that's more modern or more realistic.
Someone I know is studying some CF stuff. They made a comment about being more confident in their decisions b/c of thinking-methods stuff. I replied with the following and I wanted to post it here b/c I think it helped me learn/clarify some CF stuff too:
> [... I quoted a thing here about being confident in decisions ...]
I don't know if you had the following in mind or not, but I think there's something pretty substantial here. There are two forms of confidence that occur to me that matter:
* confidence in a decision -- that it's a right / acceptable / good decision.
* confidence in *the error correction method associated with criticism and thinking*.
The first one is the typical, common sense, 'everyone gets it' meaning. The second one is the subtle one:
Confidence in a decision isn't just about the decision being correct. We're fallible creatures and there's ~always *some* knowledge that might exist that would change our minds.
Like, for any situation where you have type 1 confidence it's usually easy to think of new crazy-edge-case type knowledge that would change our mind. That demonstrates the fallibility, but we can't predict what the change-our-minds type knowledge would be for a given situation (in general: new knowledge is unpredictable and surprising; if it weren't, then it wouldn't be new!)
If you have **type 2** confidence, tho, then criticisms are *always helpful*. If you have confidence in your *methods* then a crit will always help you improve your decision; because a criticism is a reason that your previous judgement (idea) wouldn't work! *This* type of confidence is super-powerful, b/c it's the confidence to pursue a goal without fear. If you only have *type 1* confidence then -- being fallible creatures -- we can never be sure that an *individual decision* is right or not. But having *type 2* confidence means that *our methods of error correction* are good enough to handle *literally anything that reality can throw at us* and there *are no existentially fatal criticisms*, there are only *helpful* crits.
makeup vlog 9
You spent a lot of money on make-up without researching it first. Over $1200. Enough that you have to make lifestyle changes this month. Then you found out later that you made some bad purchases. E.g., in vlog 9 you said Mecca matched your foundation wrong. Or in vlog 6 you said you spent a lot of money on drugstore & amazon makeup, and then went and spent more money on Mecca make-up and thought you should have done Mecca first. It would have made more sense to research first before spending a lot of money.
You said you bought stuff off amazon without colour matching it first. But a lot of amazon makeup is available in drug stores. People can go colour match in drug stores first, and then buy it off amazon later. (At least they can in America. I don't know if Australia is different.)
Also I don't know why you are so negative about drugstores.
> I think my change of voice implies something negative, either: about the idea of calling it a "drug store"; or about ppl who shop at a drug store for makeup e.g. poor ppl, whores (referenced in vlog 5), lazy/careless ppl, etc; or mb just like an anti-american attitude in general (I hope not this one).
The one you hope it's not is the anti-American one. Do you think the other ones are all fine? Why do you assume people who buy drugstore makeup are poor or lazy/careless? Do you think not wanting to spend $1200 on makeup makes someone "poor"?
Also, why use the word "whores" at all? That's not a neutral word.
#19925 Thank you!
> It would have made more sense to research first before spending a lot of money.
Yes. There are two things that I think are important here:
1. My surprise at how expensive it was is, I think, a worthwhile thing to learn (and something I didn't predict)
2. I'm fortunate enough that $1200 will not have a terrible impact on my life. I was prepared to spend $ to do this, and I chose to prioritise other things (like my time) than doing research on buying makeup efficiently.
Because of these two things, I don't think it was a major error to spend $1200+.
> People can go colour math in drug stores first, and then buy it off amazon later. (At least they can in America. I don't know if Australia is different.)
Re Australia being different: sorta. Our drug stores are smaller. I've been to a CVS in SF and a Walgreens in Vegas, both were much larger than any Aussie chemist I've ever seen. There are other places I could have gone to color match first, tho (e.g. Myer, which I referenced in like 2 or 3). In chemists they usually just have a printed palette and some samples, but the samples can be unreliable. I could have sought more assistance from relevant friends or family members too. I definitely didn't do this the best way that I could have.
I also like the idea of paying for a product at the place you do color matching. Mainly this applies to the first instance. It could be an error in my thinking but I think it's also something that's low impact so I haven't thought much about it.
When I went to Mecca, part of the reason I was prepared to spend more $ was that I wanted to do these two weeks decently (not making too many compromises), and I had already decided that I didn't want the project to extend much (or at all) beyond two weeks.
> Also I don't know why you are so negative about drugstores.
If you're referring specifically to the comment/self-crit I made; I don't know either, that's part of the reason for the self-crit. If you mean a point in the video (other than me saying "drug store" in a funny voice) I would appreciate you linking me. I want to understand, too.
>> I think my change of voice implies something negative, either: about the idea of calling it a "drug store"; or about ppl who shop at a drug store for makeup e.g. poor ppl, whores (referenced in vlog 5), lazy/careless ppl, etc; or mb just like an anti-american attitude in general (I hope not this one).
> The one you hope it's not is the anti-American one. Do you think the other ones are all fine?
No. I think the anti-American attitude would be worse, tho, because it would point to a more substantial error in my thinking. I think the other errors would be easier to fix. I might be wrong, but based on my estimate of how big a problem it is, I think the anti-American attitude is a bigger issue.
> Why do you assume people who buy drugstore makeup are poor or lazy/careless?
I don't; the list I provided was meant to be brainstorming (so like an "or", not an "and"). They're attitudes that I guessed I could have, rather than things I think I believe. I apologize if that came off differently; that wouldn't be good, and I am interested in avoiding that sort of error in the future.
> Do you think not wanting to spend $1200 on makeup makes someone "poor"?
Not at all. I know I'm doing things from scratch (so spending a lot at once) and I know I haven't tried to spend money efficiently. WRT buying from a drug store, AFAIK that's the cheapest place to buy makeup, so *if* I have something against ppl who buy makeup at a drug store, then it might be that I actually have something against poor ppl. Without knowing what that attitude is, I can't fix it.
I think it's notable you didn't quote the last part of my self-criticising comment:
>> Anyway, I think changing my voice like that was bad and reflects something (which I don't fully understand atm) that is not a good thing. Like some bad or wrongly-judgmental attitude.
>> Ideas and crits (i.e. help) welcome
I specifically ask for help on this issue. I don't know if you're intending to be helpful or not, but I want to mention that at this point. You have helped me reflect from some PoVs I hadn't done so from previously, tho. Thx.
> Also, why use the word "whores" at all? That's not a neutral word.
The use of "whores" is a reference to the clip from *The Simpsons* I played in 5. I agree it's not neutral. In hindsight I don't think I provided enough context to justify using it; like if someone was reading that comment without watching ep 5 then it will seem like an error, whereas I might have been able to make the comment -- just as effectively -- without using the word.
> #19925 Thank you!
>> It would have made more sense to research first before spending a lot of money.
> Yes. There are two things that I think are important here:
> 1. My surprise at how expensive it was is, I think, a worthwhile thing to learn (and something I didn't predict)
> 2. I'm fortunate enough that $1200 will not have a terrible impact on my life. I was prepared to spend $ to do this, and I chose to prioritise other things (like my time) than doing research on buying makeup efficiently.
> Because of these two things, I don't think it was a major error to spend $1200+.
Even if it made sense to spend $1200 and to prioritize saving your time, it could still be an error to have spent $1200 *in the way you did*. I'm not saying it was - just that this is a possibility to consider. For example, maybe you should have spent some portion of the $1200 on hiring someone for a consultation.
> Even if it made sense to spend $1200 and to prioritize saving your time, it could still be an error to have spent $1200 *in the way you did*. I'm not saying it was - just that this is a possibility to consider. For example, maybe you should have spent some portion of the $1200 on hiring someone for a consultation.
Ahh, yeah. gp. I think I have some bad ideas still about things like money and paying ppl. paying for a bit of consultation didn't occur to me; maybe because I thought I could have asked some women I know and gotten some advice for free? Hmm.
** --- (the rest of this post is some free-writing / reflection / notes) --- **
This is mb also related to an attitude I have towards learning, where I like think it's better or more valuable to explore things on my own than bootstrap via other ppl's knowledge. Something like that. This is a bit funny, tho, b/c I am still bootstrapping via other ppl's knowledge (tutorial vids), just not as ~directly as Q&A that I could do via like an hour of consulting time.
There's some merit to that learning attitude generally. like DD made some significant improvements WRT structural knowledge & Popper's ideas. he thought about them in a diff way that had more reach (or something like that); they were more useful b/c of the way that he understood them, and that understanding was possible b/c he didn't do too much like direct bootstrapping from contemporary Popperians.
Mb that learning attitude makes some sense to do for a bit, or if you're a really good thinker and working on hard/complex philosophy, but I think I've just convinced myself that it's not worth doing most of the time (e.g. learning makeup).
** makeup content / cost / social dynamics **
Also, I mentioned in vlog 9 at 4:27 that there's like a lack of critical content about makeup; I think if there was more critical content about makeup then that would have helped me spend $ more efficiently.
This relates to some social dynamics in makeup vids too, like the section from Stephanie Bailey that I quote/show in vlog 8 at 6:28 where she claims that most ppl can't afford to wear expensive foundation every day. Those sort of attitudes can suppress good tips like: if you are willing to spend $1000 then buy expensive staple products up front. Also, expensive foundation works out to like $1/day which isn't like totally negligible, but it's pretty affordable if someone wanted it. *Most* of the $ i spent at Mecca was on other stuff where cheaper alternatives aren't as impactful.
no vlog update today
No vlog update today. I am not sure it's worth the overhead to keep producing videos; like I don't think there's that much more to learn that's substantial. (how would I know, tho?)
IDK if I'll do more in the way I've done them. I want to do a full routine a few more times at least (probably on weekend days), but I don't enjoy doing makeup and it's high overhead (both the routine and the video). I will at least make one more video wrapping things up, covering some of the topics I mentioned in 8 or 9. I'll probably leave the door open for further updates if I choose to produce them, but I'm not sure it's worth it.
I think it might have been an error to pursue the cadence that I did. There were good things about that, and I enjoyed making some of the videos, but producing videos is only an instrumental goal (the bigger goal being a track record). I ended up spending more time and energy on the project than I thought I would.
I think the project was successful even if I don't make more videos in the style I did; that is: I learnt a lot.
> I specifically ask for help on this issue. I don't know if you're intending to be helpful or not
yes, I was trying to be helpful. I wouldn't have bothered commenting if I wasn't trying to be helpful.
Your response was long and detailed but failed to actually engage with most of my points, so it was difficult to respond back to. You also seemed defensive, and doubted that I was even trying to help you, despite you opening with "Thank you!"
> I think the project was successful
what were your initial goals? what do you think you succeeded at?
In your first vlog, you said stuff about wanting to have a better understanding of women. From what I did watch and read, I didn't think you had a good understanding of my position or experience, or that of other women I know. (Not saying you should continue the project, btw - I don't think you were doing the project in a way that would help you gain an understanding of women either.)
I also don't really think you are interested in discussing it, or interested in the crits that I have. (Not saying you should be - not everything has to be a high priority for you.)
And, again, I am writing this as an attempt to be helpful.
finished *it's not luck*
I finished reading *It's Not Luck*. This is the 4th Goldratt book I've read (the others being *The Goal*, *Critical Chain*, and *The Choice*).
I like it. I found the audiobook was lacking in some parts b/c there are some important diagrams early on and about half way through. I'm going to review those sections particularly b/c I want to make sure I understand the methods and can replicate them. I looked at the diagrams (I have the print version too) but didn't follow them closely.
I like the end; there's like a quick-fire recap of the methods used. There's some info that was new (or at least I didn't spot it earlier) like: goals have obstacles; big goals have intermediate objectives (which are in some ordered dependency graph with both parallel and serial parts); each intermediate objective should address some obstacle. That sounds pretty common sense, but the related method is the simplest and clearest method of planning I've heard. (the main part of the method that's missing from that list is the "what are you going to do about it?" question/answer.)
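The dependency-graph part of that planning method can be sketched in code. This is my own toy illustration (the objective names and dependencies are invented, not from the book), assuming Python 3.9+ for the stdlib `graphlib` module:

```python
# Sketch of "intermediate objectives in an ordered dependency graph with
# both parallel and serial parts". Each key depends on the objectives in
# its value set; a topological order gives one valid sequence to tackle
# them in. All names here are made-up examples.
from graphlib import TopologicalSorter

deps = {
    "reach goal": {"objective C", "objective D"},   # serial: goal needs C and D done
    "objective C": {"objective A"},                 # C depends on A
    "objective D": {"objective A", "objective B"},  # D depends on A and B
    "objective A": set(),                           # A and B have no prerequisites,
    "objective B": set(),                           # so they can be done in parallel
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # A and B come first (in some order), then C and D, then the goal
```

`static_order` raises `CycleError` if the objectives depend on each other circularly, which is itself a useful plan check.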
I also like the company strategy stuff in the last few chapters, especially the idea of deliberately leaving excess capacity in market segments to keep a company flexible. The idea is that company resources are general/flexible across multiple segments, then you can prioritise resources to service high-profit segments, tho make sure to always leave some excess capacity in each market segment. When market segments lose demand or become unprofitable, you can always move to another market segment b/c you left excess capacity (i.e. didn't fully exploit that segment). It also means that you can avoid over-saturation of a market becoming a bottleneck (which could also mean that a company becomes like over-specialised for a segment; i.e. loses flexibility).
#19944 I want to think about what I should say before I reply fully. I agree that I was defensive. I think some of my gratefulness was genuine, even tho I was a bit defensive. There's a conflict/tension there tho I'm not fully sure of the details yet.
right/consistent/fast <-> eureka/chewing/integration
the steps of learning (do it right/once, do it consistently, do it fast) work well for explaining physical tasks like speedrunning, and some think-work like coding (where those 3 steps are sometimes said: make it work, make it right, make it fast). but how do those steps work for learning more abstract ideas (e.g. calculus, theory of constraints, philosophy)?
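As an aside, the coding version of the steps can be shown with a toy example (my own illustration; the function and the staging are invented, not from any of the sources discussed here):

```python
# "Make it work, make it right, make it fast" on one small problem:
# computing Fibonacci numbers.

# Step 1 -- make it work: a direct recursive translation of the definition.
def fib_v1(n):
    return n if n < 2 else fib_v1(n - 1) + fib_v1(n - 2)

# Step 2 -- make it right: handle bad input explicitly and document intent.
def fib_v2(n: int) -> int:
    """Return the n-th Fibonacci number (fib(0) == 0, fib(1) == 1)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    return n if n < 2 else fib_v2(n - 1) + fib_v2(n - 2)

# Step 3 -- make it fast: same behaviour, but iterative (linear rather than
# exponential time), so it stays usable consistently as inputs grow.
def fib_v3(n: int) -> int:
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_v3(10))  # all three versions agree on small inputs; only v3 scales
```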
I think the first way of thinking about the 3 steps can work for calculus (but it's harder to see for ToC or phil). With maths you generally get shown a method / proof / formula / etc and then get problem exercises of increasing complexity, and sometimes challenge questions. the exercises are usually: some basic ones first (completing them shows you can do step 1), then some complex ones (completing them shows you can use the idea consistently), then some more complex ones (completing them shows you can use the idea consistently enough to mix with prior/other techniques, mb with some creative thinking). challenge questions push your ability to use the idea (consistently and quickly) s.t. you can use the idea in creative new ways with previously learned ideas (this helps to create/improve structural knowledge). the progression of exercises maps well to correctly, consistently, quickly.
ToC and Philosophy ideas are harder to relate to the 3 steps. I think, roughly, the process follows the same pattern but doesn't make sense with those steps as they're currently named. WRT abstract ideas, what does "do it once" mean? excepting analogies to things like pyramids and structural integrity (like, engineering), how do the ideas of "consistency" and "speed" relate? They *sort* of have natural ~connotations, like consistency <-> 'know how/when to use the idea', and speed <-> 'idea has low overhead / it's intuitively able to be mixed with other ideas'. Even those ~connotations are a bit of a stretch, tho.
I think these steps are a good ~translation for the original steps: realise the idea (eureka), chew on the idea (so that you get a fuller understanding and develop structural knowledge), and integrate the idea (use the idea in new ways and in combination with your everyday thoughts). These steps don't necessarily happen all at once for a ~complete idea. Like "theory of constraints" is a big idea made up of lots of smaller chunks, some of which aren't documented in Goldratt's *The Goal*.
How can we still learn complex ideas, then? When we try to learn complex ideas, we learn them incrementally with smaller ideas. Some ideas are more easily learned than others, which can be due either to better (or more fortunate) structure, or because they're smaller ideas themselves (fewer components), or because they overlap a lot with stuff you already know. When ideas are big *and* unintuitive it means that there are a lot of new concepts that you need to learn all at once. That's hard. Sometimes it gets easier if you realise an instrumental goal that lets you use a subset of the idea (e.g. ToC applied to factories, rather than the full ToC). You can't rely on that, tho; a shortcut like that isn't always available.
The biggest idea you know is basically your full mind, and it's made up of lots of smaller ideas (sub-ideas), which are themselves still massive and complex compared to their sub-ideas, etc.
Learning complex ideas can have a long first step, and might involve some chewing on smaller ideas. When you get the big eureka moment it's because you've integrated the smaller ideas. I think the chewing process for the big idea will help work on integration for the smaller ideas, too (you can still have work to do with the smaller ideas *after* you start using them).
Note: I *think* I am using 'chewing' and 'integration' in the objectivist sense, but IDK. If I am not I would appreciate someone letting me know.
a small contribution to (the discussion/work that is) CF's integration
DD says in BoI something like: some subset of objectively true ideas implies all other objectively true ideas. This makes sense; if there's some ~objectively true and complete body of knowledge to describe reality (infinite or not), and progress is moving us towards that body of knowledge, then things that are ~true (in the everyday CF sense: they work) will overlap more and more, like they will imply parts of each other and they can be ~easily integrated.
one way that i thought about this after reading BoI was the idea of a "stand alone complex" (the subtitle of s1 of the anime *ghost in the shell*). The way i imagined things in my mind was like a graph of ideas/explanations that connected to each other. those idea-nodes only connect if they don't contradict. there are like 'bubbles' of ideas that are compatible with one another; all the bounded ones are parochial, but the single unbounded one (finite atm but forever growing and unbounded) is like the 'biggest self-supporting collection of ideas'; the "stand alone complex" (SAC).
integration is like creating stronger interconnections between the idea-nodes in that graph, and reducing the number of nodes via ~unification (not that this lessens the ~magnitude/measure of the SAC of ideas).
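The 'bubbles' picture can be made concrete: ideas as nodes, an edge where two ideas are compatible (don't contradict), and a bubble as a connected component of that graph. A minimal sketch, with made-up idea names and edges chosen purely for illustration:

```python
# Ideas are nodes; an edge means the two ideas don't contradict each other.
# A 'bubble' is a connected component; the largest one plays the role of
# the "stand alone complex" (SAC). The idea names/edges here are invented.
from collections import defaultdict

edges = [
    ("fallibilism", "critical rationalism"),
    ("critical rationalism", "theory of constraints"),
    ("fallibilism", "yes/no philosophy"),
    ("astrology", "numerology"),  # a small parochial bubble
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def bubbles(graph):
    """Return the connected components (bubbles) of the idea graph."""
    seen, out = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node])
        seen |= comp
        out.append(comp)
    return out

# The SAC is the biggest self-supporting collection of compatible ideas.
sac = max(bubbles(graph), key=len)
print(sorted(sac))
```

Integration, in this picture, adds edges within the big component (and merges nodes via ~unification); progress grows the SAC while the parochial bubbles stay bounded.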
Goldratt has a lot of good ideas in his books, and concepts like 'excess capacity and how it relates to constraints' help us understand the core of fallibilism: that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas. it helps us answer why it's okay to use newtonian gravity in a toy simulation.
This sort of thing has already been integrated and expanded upon by IGCs, breakpoints, yes/no, and curi's other work.
I wrote this in part because I am continually surprised by the convergence between ideas as I learn more about CF and its antecedents. I love it; it's great.
> Goldratt has a lot of good ideas in his books, and concepts like 'excess capacity and how it relates to constraints' help us understand the core of fallibilism: that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas. it helps us answer why it's okay to use newtonian gravity in a toy simulation.
Can you explain more about how “excess capacity and how it relates to constraints” helps us understand “that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas”? That connection sounds interesting but I don’t get it.
Max, do you think you achieved *mastery* of some significant, new things in your makeup project? If so, you could list those. If not, I think you should have higher standards and stop overreaching.
Can you self-evaluate the correctness of any of your new makeup knowledge with similar confidence to your self-evaluations of counting to three or judging whether the word "with" is spelled correctly? Those are examples of what mastery looks like.
The same goes for all your other philosophical work. Keep it simpler. Practice things. Aim for mastery. Aim for a low error rate where correct criticisms are uncommon, surprising and treasured.
Consider what you do have mastery of and build on it. Plan out projects intentionally with goals and trees, keeping issues like mastery and overreaching in mind.
#19963 Where are the 5+ successful past projects at 90% of the size, complexity and difficulty of the makeup project? And at 80%, and 70%, and 60%, etc., all the way back incrementally to simple projects like crawling to a location as a baby.
#19964 You don't have good examples of what success looks like to compare your project to. There's a huge gap from the makeup project to your most similar projects that are clear, confident, decisive, unambiguous successes.
And these are not new things that I'm saying.
Start way smaller, get quick, clear wins, and iterate. Start with multiple successful (micro) projects per day. Finish 100+ in a month with a not-decisive-clear-success rate under 10%. Establish a baseline of what you can do that way and get the iteration started.
re: Makeup Mastery?
> Max, do you think you achieved *mastery* of some significant, new things in your makeup project?
I think I got better at some things (some makeup related, some video production related, some people/social related). Even if I gained mastery over those things, I don't think they're *significant* compared to the size of the project or the project's goals. I think that relative scale matters, e.g. mastering teeth-brushing is significant if the project is like 1-2 hours long (mb split up), but it's not significant if the project is days or weeks long or way bigger in scope.
It's not worth me spending 30+ hours on something just to get a relatively small skill like doing video editing quickly (relative to filming + copying + rendering), or knowing whether lipstick is applied well/badly. It's nice that I have those skills now, but I could have used movies or stock footage and just practiced editing them to learn that skill in a few hours. (I don't think I mastered those anyway, but I am consistent in some ways.)
> If not, I think you should have higher standards and stop overreaching.
It just occurred to me that 'high standards' and 'high ambitions' are, in some ways, contradictory. Or mb not contradictory, just that it's easy to mistake the two and, when they are mistaken, it can be really bad for the project (i.e. it'd fail).
> Can you self-evaluate the correctness of any of your new makeup knowledge with similar confidence to your self-evaluations of counting to three or judging whether the word "with" is spelled correctly?
Not for anything significant, at least. Super basic stuff maybe, but a lot of that is just common sense.
A case in point is my claim that Mecca did color matching wrong (against my face, not my neck). I have an explanation for that, but I only learned it after like 8 days of watching videos about makeup, and I don't know other ways of checking if I'm right or not besides like asking ppl. I have lots of ways of checking 1,2,3 and the spelling of "with".
> The same goes for all your other philosophical work. Keep it simpler. Practice things. Aim for mastery.
I think I lack some of the skills needed to self-eval. If I think about trying to self-eval on e.g. Goldratt stuff, IDK how I can do that besides writing about it to find contradictions or holes in my understanding. Discussion helps, too, but that's not self-eval anymore. Convergence and non-contradiction with other knowledge is good, too, but I think that's more like a hint than self-eval; it doesn't help to decisively find that I'm doing something wrong. Finding contradictions does tho.
I think there might be some like good general methods that could help with self-eval of philosophical work, but I'm not sure of them. Like I suspect they're out there but don't know what they are yet.
Mb writing bridging explanations (roughly: doing integration) is a good way to self-eval some. Feels like that's only a partial strat tho.
> Aim for a low error rate where correct criticisms are uncommon, surprising and treasured.
This struck me as really important.
I think my mindset around criticism still has some serious issues, like I value crits a bit but I can't tell quickly and reliably if crits are correct, surprising, and valuable. Sometimes I can, but those are exceptions, not the norm. I think mb I am still chewing on the idea that crits are valuable/desirable. I know they are in a bunch of explicit ways, but the idea isn't fully integrated w/ my mindset yet.
I think I wasted some opportunities when replying to 'a girl anon' that I could have made better use of, but IDK exactly how.
An issue for me atm is that I can tell when crits are uncommon and surprising, but I don't treasure them. yet (i hope).
Notes: I feel like mb some of my paragraphs that expand on my 'No' answers are like excusing things or evasive but I'm not really sure. I wanted to write more than just 'no' and to think a bit about the ways that minor successes are used to justify major failures. If there's dishonesty in those paragraphs then I don't think I know how to expand honestly, yet. Mb they're okay as reflections. i also wanted to reconsider whether I should think of the makeup project as successful via writing things out, and I am changing my mind on that, but there's more to do.
Ctx: I thought off and on about this reply for approx 6 hrs. So I did slow down some. I didn't write down a goal for this reply, tho. It's much easier to think of goals for posts that aren't replies compared to posts that are replies. I guess I put some of my goals for the reply in 'notes' above, but IDK if I had all those in mind before starting, or if I added them later on / retroactively.
kb layout + accidental post -- postmortem
I posted #19969 before I meant to. I don't think it's obvious since I was in the final stages of editing. On my KB layout at the time (I have just changed it now) I had tab next to enter on the right thumb pad. this meant that I could accidentally hit <tab> then <enter> when I meant to just hit <enter> or just <tab>. i've avoided this sort of proximity issue in the past (predicting the error and avoiding some key locations b/c of it), but it's something that I should have known to avoid.
> Can you explain more about how “excess capacity and how it relates to constraints” helps us understand “that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas”? That connection sounds interesting but I don’t get it.
yes. i think this would be good practice for me so I want to write a bit about it.
first, tho, do you agree with the second bit you quoted? i.e.:
>> the core of fallibilism: that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas.
(I included a bit extra of the quote for context)
If you don't agree with that then I guess that something I write that directly responds to your request would not be very useful to you.
afternoon project - SMM2 level
this is a project plan that should be achievable in an afternoon/evening (today). my method is based on things i've learnt from curi and the method at the end of *it's not luck*
**goal:** produce a fun SMM2 level that teaches and demonstrates the 3 steps of learning: do it right/once, do it consistently, and do it fast.
**ctx:** I own SMM2 and have a little experience with the level editor. I have played 10-20 hrs of SMM2 and am reasonably familiar with the game modes, their features, enemies, interactions, etc. I've done a bit of planning and pre-thought about this, including some exploratory level creation last weekend. I drew up a level concept last night that I think can work -- that was something I wasn't sure about for a while.
**obstacles:**
1- lack of knowledge about vertical levels and camera lock
2- haven't tried to create the level, mb there are inconsistencies with the features I need (e.g. vertical + rising lava in SMW or New SMB modes)
3- haven't uploaded a level before and don't know the process
4- might not have left enough time today
5- it might take me too long to clear the level if it's challenging
6- the level might be too easy so it doesn't effectively teach ppl anything
7- i might not be able to fit enough height in the level to make it interesting and challenging for the player
8- the level is boring or frustrating
9- ppl don't like it
10- i need a level concept+design
11- rising lava might not be fast enough to be challenging (used to demonstrate step 3 in learning process)
12- level isn't challenging
**intermediate objectives:**
- learn about SMM2 level editing via the in-game lessons and supplementary stuff like YT vids; solves obstacles 1,2,3
- work quickly on things that matter (aesthetics last), pay attention to avoid inefficient creation patterns (e.g. use copy-paste); solves obstacle 4 and partially 5
- integrate quality of life features: good checkpoints, avoid lengthy resets, losing progress, softlocks, etc; helps solve obst 5,8
- 3 1-ups + THX, helps solve obst 9
- ensure there's a decent level of challenge, even for experienced players, helps solve 6,8,9
- use doors sparingly and only where necessary. helps solve 7; note: doors could be added at the end to help with quality of life stuff if I'm under the limit of 4 doors.
- **done** come up with a level design/concept to iterate on, solves obst 10.
- use tracks and saws, podoboos, or spinning fire things to create the necessary time pressure, if required, solves obst 11. i can also research more ways to create time pressure and look at what ppl have previously done (e.g. by searching for specific speedrun levels)
- adjust position of blocks to make jumping up harder if it's not hard enough, solves 12 (could also make the gaps in the ledges shorter to make jumping up a bit harder, or add spikes in strategic places to make the jumps a bit more challenging, put higher standards on the players method, and create tension)
I might not need to solve obst 5,7 and I only need partial success with 8,9 (I don't need to please everyone)
most of the intermediate goals are about level design stuff and already include some knowledge i have about level design. I could do exploratory learning if obst 8,9 is an issue at the end.
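One nice property of this style of plan is that obstacle coverage can be checked mechanically: every obstacle should be addressed by at least one intermediate objective. A small sketch (the objective names abbreviate the plan above; the mapping of objectives to obstacle numbers is taken from it):

```python
# Each intermediate objective maps to the set of obstacle numbers it
# addresses (per the plan above). Any obstacle not covered by some
# objective shows up in `uncovered`.

objectives = {
    "learn SMM2 level editing": {1, 2, 3},
    "work quickly on what matters": {4, 5},
    "quality-of-life features": {5, 8},
    "3 1-ups + THX": {9},
    "decent challenge level": {6, 8, 9},
    "use doors sparingly": {7},
    "level design/concept": {10},
    "tracks/saws/podoboos for time pressure": {11},
    "adjust block positions": {12},
}

all_obstacles = set(range(1, 13))          # obstacles 1 through 12
covered = set().union(*objectives.values())
uncovered = all_obstacles - covered

print("uncovered obstacles:", sorted(uncovered))  # -> [] here: all addressed
```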
at this point I chose to re-read *it's not luck ch 26*. I noticed that there's a q Julie asks that I didn't write in my notes during this chapter: "Do you have enough intuition about the subject?" similar to Alex, my answer is "I think so." after reading the chapter I think my next steps in the project are to plan the order I'm going to meet intermediate goals (and the corresponding actions, but that's fairly trivial and can be done ad-hoc), and then to execute on the plan.
**sequence of actions:**
- do the relevant SMM2 tutorials, identify gaps in my knowledge or things I'm unsure of, and supplement with more SMM2 tutorials and/or watch relevant YT vids
- refine the level design with my new knowledge and decide things like possible game modes and the required maps (I think particular backgrounds/level-themes are required to do vertical levels the way I want to). ideally I should have several options for game mode and level theme combinations that can let me achieve the goal.
- create a prototype level without quality of life features to verify the design can work
- iterate on design if necessary, do additional research if necessary (in general I can do this after any step)
- refine the prototype level and include the quality of life features
- test the level start-to-finish and evaluate it against the goal. iterate as described if need be.
- refine and polish the level, test again, add aesthetic stuff if I have time.
- publish the level.
there are some trivial steps I haven't included like 'name the level', 'write the level description', 'turn on my switch', etc. don't think it's necessary to document those, tho. I can do all that with excess capacity and they can be done at any point (except like turning on the switch, etc).
**level design prelim**
the main tech the level teaches is to reliably climb up a cliff via 1-block wide ledges
note: the 2nd image is of the sub-world and the note on the bottom right that's a bit cut off says "enuf time for player to use pipe to reset".
**meta-goal of the project**
my meta-goal for this project is, having now planned it, to not need to alter the plan but still complete the project. if I fail this goal, then my goal will fall back to altering the plan, documenting what went wrong/what I missed, and continuing with the plan. (I might have to alter it more than once.) another way to put this goal is that this project and plan has a low error rate.
doing this plan took approx 1hr
SMM2 afternoon project success
I went through my plan and successfully uploaded the level 3hr 15min after I finished planning, so 4 hrs and 15 min all up.
level code **GL0-1HH-MVF**
I iterated on the level mostly while uploading, which took 45 min. There were some difficult jumps that I made easier, particularly at the end, and I reduced the difficulty of the 'consistent' phase to be more forgiving to the player, since the hazards aren't meant to actually kill the player at that point, tho they can still die if not careful (that's deliberate).
the rest of the level dev was done fairly incrementally; I didn't change much after I put down an initial design and did some short play-testing (to make sure jumps worked and there wasn't any cheese).
I included 2 challenge areas, the first one of which is secret.
The second challenge area is right at the end and can kill the player if they're not careful. if they're really good then they will have been able to keep the mushroom this far, so it's easier.
the entire course, including secrets/challenges, can be done without damage.
I didn't alter the plan, but I think I had some oversights: I didn't plan for experimenting/prototyping to overlap with the game-mode/level-theme choice. As it turns out there's a lot of flexibility here so the initial stuff I chose worked fine, but I should have planned for some experimenting there. Realistically it'd probs be okay to include it in the 'prototype' step I planned, but I feel like I should have included it explicitly. I count this as a minor error that I should be conscious of when doing future plans.
another minor error I made was spelling 'consistent' with an 'a': 'consistant' in the level. that error was also made in the diagrams I uploaded before.
I am surprised by how smoothly and quickly everything went compared with my past level-creation explorations.
I'm reminded of a line from *It's Not Luck* (ch29):
> We didn’t have time for mistakes, so we had to spend extra time planning!
I recorded a video of the SMM2 level for anyone who is curious but doesn't have SMM2: https://s3.wasabisys.com/xert/2021-02-13_22-44-18.mp4
I recorded it with a capture card but didn't use HDMI throughput so there was a bit of input latency; it's not as difficult/inconsistent as it looks in the vid.
>> Can you explain more about how “excess capacity and how it relates to constraints” helps us understand “that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas”? That connection sounds interesting but I don’t get it.
> yes. i think this would be good practice for me so I want to write a bit about it.
> first, tho, do you agree with the second bit you quoted? i.e.:
>>> the core of falliblism: that we can still 'know' things even if there are problems and ultimately all ideas will be refuted by ever-better ideas.
> (I included a bit extra of the quote for context)
The statement seems reasonable to me. I can't think of anything better. But saying I agree with the statement seems like it would require more knowledge of fallibilism and epistemology than I have.
> ** --- (the rest of this post is some free-writing / reflection / notes) --- **
> This is mb also related to an attitude I have towards learning, where I like think it's better or more valuable to explore things on my own than bootstrap via other ppl's knowledge. Something like that. This is a bit funny, tho, b/c I am still bootstrapping via other ppl's knowledge (tutorial vids), just not as ~directly as Q&A that I could do via like an hour of consulting time.
> There's some merit to that learning attitude generally. like DD made some significant improvements WRT structural knowledge & Popper's ideas. he thought about them in a diff way that had more reach (or something like that); they were more useful b/c of the way that he understood them, and that understanding was possible b/c he didn't do too much like direct bootstrapping from contemporary Popperians.
> Mb that learning attitude makes some sense to do for a bit, or if you're a really good thinker and working on hard/complex philosophy, but I think I've just convinced myself that it's not worth doing most of the time (e.g. learning makeup).
I think you have some really big mistakes in your thinking here. They are the kinds of mistakes that could be affecting all the other learning you are doing too, so it's worth actually thinking about and figuring it out. It's not specific to the context of makeup (which is where it came up).
I know you identify that you may have a mistake in your thinking. But I think the mistakes are worse/bigger than you realize.
I just wanted to note this because I have seen other mistakes in your writing & learning processes that I think may be caused by this same line of thinking.
re: 19985 by 'trying to be helpful'
#19985 I think what you're saying is that the part of my learning attitude "where I like think it's better or more valuable to explore things on my own than bootstrap via other ppl's knowledge" has problems. (There could be other problems related to the other stuff I said too, but the above is like the essence of it?)
I originally had trouble understanding your msg because I *thought* I remembered/knew what I wrote, but after re-reading it, I didn't. I thought I had said something more like 'this is a part of my thinking that has a mistake' but really I just said 'this is a part of my thinking'.
Part of the reason I thought I'd said something more self-criticising is that I've talked a bit with curi about it in the tutorials and I've made some posts about it here, e.g. #18021 (I ctrl+f'd for 'exploratory' to find that, but I didn't find other examples, so I may not have posted about it as much as I thought.)
I think your msg is saying that, even if I have posted about this problem and mb recognise it a bit, that it's a bigger problem than I realise and has bigger impacts than I realise, even still. Also that if I want to get better at learning and thinking then I have to address this mistake as somewhat of a priority because it's a major error.
Does this line up with what you meant?
(my goal with this post was to ask the right q's that would give answers that let me figure out if I understood what you said or provide some way fwd if I still didn't understand)
> I think your msg is saying that, even if I have posted about this problem and mb recognise it a bit, that it's a bigger problem than I realise and has bigger impacts than I realise, even still. Also that if I want to get better at learning and thinking then I have to address this mistake as somewhat of a priority because it's a major error.
> Does this line up with what you meant?
Yes, that is what I meant. I know you recognize it as a problem. But I don't think that your view goes far enough. You still think there is merit in doing things from scratch, not building off of other people's knowledge.
I think you noticed a mistake in your thinking, but your analysis seems kind of like you think you are taking a thing that is good in a general way, and applying it in an area where it isn't good.
You aren't taking the mistake seriously enough - you are downplaying it, when you should be emphasizing it.
I think this mistake has come up in other places. You said something in discord the other day:
> I ~never look at solns for coding or maths stuff.
I think that view is mistaken, and is related to the mistake I am talking about in this post. You value starting from scratch, figuring things out on your own, not building off of existing knowledge. I don't think that is the best way to learn.
I have been intensely focused on work the past 2+ weeks (and have had some major successes). That will continue for at least another 2 weeks, and probably much longer. That means I won't post here nearly as much. FYI.
Mb some progress on my learning conflict stuff
>> I ~never look at solns for coding or maths stuff.
> I think that view is mistaken, and is related to the mistake I am talking about in this post. You value starting from scratch, figuring things out on your own, not building off of existing knowledge. I don't think that is the best way to learn.
I think I just realised something.
I've changed my attitude a bit since mid-feb (in the right direction but not really going past any major breakpoints).
I was thinking about project planning, and I noticed that I don't do stuff like explicitly make trees for everything (I do for some things and particularly for hard things).
Mostly I do that in my head, and incompletely -- if I wrote the tree down and did proper brainstorming I'd have a better tree. For lots of stuff it doesn't matter *at the time*. Like I have enough *excess capacity* in my automatic-tree-making that it usually doesn't cause a project to fail. Partly that's b/c I can update it easily and usually there's nothing high stakes enough that missing it is really bad. However, have I *mastered* tree-making? Well, I just said that I haven't: "if I wrote the tree down and did proper brainstorming I'd have a better tree".
Why does this attitude seem okay at the time, but actually isn't? B/c, in effect, I am and have been compromising my self-judgement (when I move on too quickly and also don't keep improving those skills). I end up thinking I have more excess capacity (WRT e.g. planning skills) than I do, but how will I know that before I start a project? Certainly I do have *some* idea of whether I can complete a project, but I still make mistakes, and I'd have a much better estimate if I mastered relevant skills.
It's not that I'm immediately overreaching or doing it consistently, but I *am* consistently *at risk* of overreaching (to diff degrees in diff contexts).
I guess this isn't actually a direct reply to *trying to be helpful*, but I think it's related. I've thought a lot about sources of organization of knowledge recently, and there have been some ~minor changes in my choices b/c of it. One of the reasons this idea is related is that I think it implies that, b/c mastery is so important, the benefit of using well organized info is increased.
Rereading, the way I put the first two sentences is misleading. I said:
> I think I just realised something.
> I've changed my attitude a bit since mid-feb (in the right direction but not really going past any major breakpoints).
That makes it sound like the thing I just realized was something I did in February.
That's not the case; the second sentence was meant to be like bg context but I didn't fix the placement in editing. I meant to include it, but didn't find a good way to integrate it as bg context.
It's common that people can do stuff with their autopilots/intuitions/defaults without writing much down. Some people are even really good at some stuff (their expertise) that way. But it's limiting. It puts a cap on how advanced your knowledge can get. (It's not an exact cap. It's more like it creates diminishing returns on your learning, so going further takes more and more work.) Explicit analysis and error correction of the pretty-good stuff removes the cap on progress. It makes it easier to realize better can exist (people generally don't see what they're missing out on) and access it.
this is part of my ongoing work on my major idea-conflict about learning.
What's the return on learning?
Let's say that you devote 1/2 of your working life on self-improvement. Or that your employer lets you devote 1/2 your working hours to self-improvement in a business-focused direction.
How long does it take for you to hit Return On Investment (ROI)? If you're spending 1/2 your hours on learning, then the point you reach ROI is when you get more than twice as good at your job (or your hobby, or whatever).
Say that, if you are good at learning, you get 1% better at doing your job per day. I think that's actually achievable for context-specific things (i.e. right person + right ideas + right subject matter). Well, your productivity "principal" is 1, the "compound interest" you get from learning is 0.01, and the period is `d`. So `(1 + 0.01)^d > 2` is our ROI point. That means `d > ln(2)/ln(1.01)`, which gives approx `d > 69.7`. *70 days* is the ROI point.
Is that practical in general? probs not. What about an improvement of 0.1% per day? That leads to `d > 693.5`, so like ~2 years. I think the 'real', general, maximum rate of improvement is between 1% and 0.1% per day for most ppl WRT most things, i.e., most ppl can get twice as good at most things in less than 2 years. There's definitely an initial period of faster improvement for completely new subject matters, tho.
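The doubling-time arithmetic above can be sketched in a few lines of Python. (The 1% and 0.1% daily rates are just the illustrative assumptions from this post, not claims about anyone's actual learning rate.)

```python
import math

def days_to_double(daily_rate: float) -> float:
    """Days until compounding daily improvement doubles productivity:
    solve (1 + r)^d = 2 for d."""
    return math.log(2) / math.log(1 + daily_rate)

print(round(days_to_double(0.01), 1))   # 1% per day -> 69.7 days
print(round(days_to_double(0.001), 1))  # 0.1% per day -> 693.5 days
```

Same formula as `d > ln(2)/ln(1.01)` above, just parameterized over the daily rate so other assumed rates can be plugged in.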
Is there going to be a major factor in someone's growth rate? Yes. It will be, provided they're willing and motivated to learn, *the organizational quality of the learning material*. Organization of materials covers *which things in what order with what other context?* That *must* be the major factor because, provided someone is willing and motivated, the order and quality of ideas that they're introduced to is directly related to their learning progress. How could it be otherwise? If order and quality were not decisive factors, then the order of learning materials wouldn't be that significant (contradicted by structural epistemology) and/or the quality of materials wouldn't be that significant (contradicted by error-rate and overreaching). Note that *organizational quality* necessarily includes both order and content.
Mb a good time to mention: there is some excess capacity in the order of learning material; that's good because it allows for chewing and higher-quality self-judgements about one's learning. There's also some excess capacity in the quality of learning material; it only needs to be good enough to meet major breakpoints in the quality of the student's understanding. The student can't learn ideas perfectly, but provided they avoid major structural issues w/ what they do learn, then the student will have enough excess capacity in what they have learned to do useful work. They can also improve their ideas later without the overhead of bad structure. Humility helps here -- if they overestimate the reach of their knowledge then that can inhibit future learning.
So, one's ROI on learning is heavily dependent on the organizational quality of learning material. Material that meets major breakpoints (i.e., doesn't introduce major structural issues) is worth seeking. Material that is well written and easy to understand is worth seeking.
**Seek organized learning material. Follow through.**
One of the benefits of doing philosophy is that the skills you develop help you to mix and refine multiple sources. If there is a simplistic but accessible source of info, and a high quality but badly-written source of info, then being able to consume both quickly and efficiently with minimal errors is *profitable*. Not every subject has easy-to-find material that's high quality in both regards, so this skill matters.
learning strats - addendum/errata
> most ppl can get twice as good at most things in less than 2 years.
That's provided they have the philosophical knowledge about learning to do so. That's the major constraint for most ppl. But I don't think those ideas are necessarily that hard to learn. I should have made this bit more clear, tho.
One reason I didn't say it as explicitly in the prev post is that I think this stuff is included in "the organizational quality of the learning material", like good learning material will also have good knowledge about learning that it teaches you beforehand or along the way.
That's a bit contradictory to what I say at the end tho: "One of the benefits of doing philosophy ...". The contradiction is b/c I mention this helpful skill only as an afterthought, rather than earlier in the text where it's more important. It's a contradiction because that sort of bad organization is probs not good to have in a mini-essay about the importance of quality organization.
#20252 Btw anon, I appreciate you commenting and agree with you. I think you put this idea well and succinctly:
> Explicit analysis and error correction of the pretty-good stuff removes the cap on progress.
I guess that ppl think that, b/c they're good at something, whatever specific limits they have are like ~universal among all other comparable ppl. Like they're at a "hard cap" rather than a soft one. A problem that comes out of this is that a ubiquitous soft cap is difficult to tell apart from a hard cap -- if ppl don't have the philosophy skill to tell the diff then they're self-limiting.
#20312 i replied at http://curi.us/2417-learning-skills-is-non-linear
For anyone reading this after the fact: I replied to curi's post, and if there's more discussion specific to that topic it'll probably be under that thread.
#18750 Max, you should not be trying to write a philosophy book or a long series of articles. That's large overreaching. You should practice the things from our tutoring.
> This is a bit of a guess at a general method for doing FI.
You need to organize your life and form some good habits (or otherwise set up things you happily, easily do regularly). They should include practice, reading and writing (e.g. freewriting, notes, forum posts, outlines, summaries, attempts to clearly explain something).