
Max Microblogging

This is Max's discussion thread. Max has agreed to only post as "Max" in this thread.

Elliot Temple on September 13, 2020


Overreaching, greatness, and ~meta-knowledge

Consider people who are *great* (like exceptional) at something in particular.

One of the things that makes them great is ~*meta-knowledge*, like knowledge about context regarding their *actions*.

I watched a bit of a recent Sea of Thieves WR speedrun - particularly the events during 7:25:00 -> 9:00:00 (it's like a 21hr run).

They lost like 1:20:00 from a choice to steal another crew's loot b/c that crew chased them for a decent while.

A third ship joined in a bit, too.

Near the end of this chase (8:49:00) they spot another sloop (ship of 2 crew) and one guy jokes about taking this new ship's loot.

The two speedrunners have been talking about what to do at this point, and particularly risk/rewards tradeoffs for how to sell the loot.

The two guys are good enough to - ordinarily - take on another sloop no problem.

after all, they just fought off 2 other crews of sizes 4 and 3.

their choice not to go after the sloop (and the humor of the joke) is based in this like ~*meta-knowledge* type stuff.

it doesn't matter how great you are at something, even the best ppl in the world know there are some challenges they won't win (or that are too much of a risk), and they choose to back off. they're not OP just because they're the best in the world.

Generalising this means something like: the ~meta-knowledge is *at least* as important as the knowledge about how to do the skill well (which is more like technical knowledge). Or, at least it's that important at high levels.

Basically, this is like "don't overreach", or rather, if you do overreach, don't expect to *still* be great. the ability to pick challenges is part of the reason great people are great. sorta like flying *close*, but not *too close*, to the sun.

It also relates to knowing your limits, either when something is too big a task or when (and what) to learn before doing it.

This offers a bit more clarity for an ongoing conflict of mine - something to do with learning styles and methods. I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits. And I mean it's not as bad as doing nothing at all (I guess it could be sometimes), but it's not as efficient as directed and non-overreaching learning.

I think part of the reason I have this conflict is in essence thinking too much of my own skills. That's true even tho I went through a few ~breakpoints early on in the *Tutoring Max* series. (Breakpoints might not be the right word, but I think there are like significant points of increased ~reach when we adopt new and better ideas about ourselves.)

Max at 1:30 AM on September 15, 2020 | #18021 | reply | quote

Mini post on post formatting

curi.us doesn't format exactly like markdown does. with markdown a newline between consecutive sentences doesn't make a new paragraph, but it does here.

I wrote the above in vscode (posted also on my site in a new category), so wrote it like normal markdown - using linebreaks to make sentences clearer and easier to read/write while editing, without changing how the paragraphs are meant to look.

The solution is I'll need to check for that beforehand. I could write a short script to strip linebreaks between consecutive sentences, but not sure that's worth it.
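If I did write it, a minimal sketch might look like this (naive by design: it treats every single newline inside a paragraph as a soft wrap, so it would mangle markdown lists, block quotes, and code fences):

```python
import re

def unwrap_paragraphs(text: str) -> str:
    """Join soft-wrapped lines into single-line paragraphs.

    Paragraphs are separated by one or more blank lines; any single
    newline inside a paragraph is treated as a soft wrap and replaced
    with a space.
    """
    # Split on runs of blank lines (paragraph breaks), then rejoin
    # each paragraph's lines with spaces.
    paragraphs = re.split(r"\n\s*\n", text.strip())
    return "\n\n".join(" ".join(p.split("\n")) for p in paragraphs)
```

e.g. two sentences separated by a single linebreak come out as one paragraph, while a blank line between them is preserved as a real paragraph break.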

Max at 1:34 AM on September 15, 2020 | #18022 | reply | quote

Inefficient learning is like eating the seed corn

> an ongoing conflict of mine - something to do with learning styles and methods.

Sometimes I prioritize the wrong thing. I'll spend time on fun (and maybe even slightly useful) 'intellectual' activities like coding, instead of doing more structured, efficient, and goal directed learning. That's like eating the seed corn.

It's like: I end up fed, and I still have some seed corn left over, but the harvest isn't going to be as good. What's the point of learning and thinking if not for the harvest?

indirectly related: #18025 and https://curi.us/2378-eliezer-yudkowsky-is-a-fraud

Max at 2:22 PM on September 15, 2020 | #18032 | reply | quote

#18032 Metaphorically seed corn = capital = e.g. machines. it's stuff you can save for later. time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).

curi at 2:29 PM on September 15, 2020 | #18034 | reply | quote


> time is somewhat different in that you can't save it for later. but, like money, you can spend time on things with later benefits (investment rather than consumption).

I'm not sure if we disagree on something or not. I think we roughly agree but I'm thinking of time spent in a specific way (just a subset of the time we get). For context, I read curi.us/2378 a few minutes before having that idea. I liked these bits particularly (and liked being reminded of them):

> capital goods, not consumption goods

> accumulation of capital increasing the productivity of labor

I think time can be sometimes seen like money and other capital goods. How do people save money? One option is a bank account, but that performs poorly, and is sort of like investing in loans/debt, anyway. Better investors save money by spending it *on capital goods* (and they choose the goods). After they spend money, they don't have it any more, but they have something else they can exchange for money later.

I think *time spent on learning* is similar, but not all time spent is similar. Granted, there's no bank account for time, but you can spend time now so you get more of it later -- that's one of the reasons to learn and think, you - in essence - get more time in the future because you avoid making mistakes or being slower than you could be. In that sense it's like investing in productive capacity. There's a higher upfront cost, but you get a higher capacity and larger RoI than the alternatives. The choice to spend time learning ineffectively seems to me like spending some chunk of your factory budget on hookers and cocaine; fun at the time, but it's in opposition to the main goal.

Similarly, by analogy, learning skills that don't end up helping you, but learning them effectively, is like market risk. Not every investment makes a profit, but diversification helps, and the better you are the less you waste.

Time spent on things like downtime is different from normal money; that's more like $100 of food stamps that you get once a week and have to spend that same week. You might only be able to spend it at low-quality grocers, but avoiding spending it only hurts you.

A related problem with spending time on learning is trying to spend downtime doing pseudo-learning stuff. That's more like trying to invest your $100 food stamp (not going to go well). I find trying to do ~learning stuff when I'm tired etc. often means I stay up later, sleep worse, and have less high-capacity time for important things.

Max at 2:53 PM on September 15, 2020 | #18036 | reply | quote

Eating seed corn is like disassembling machinery for scrap metal, which is different (more destructive) than leaving it idle for a day (which sounds reasonably similar to spending a day of your time unproductively).

curi at 2:56 PM on September 15, 2020 | #18037 | reply | quote

#18037 yeah okay, I see what you mean. I've changed my mind on the quality of my analogy. (I don't think it's super bad or anything, just not as good as I originally thought.)

Max at 3:23 PM on September 15, 2020 | #18038 | reply | quote

Perimortem on intuitive response to #18037

My intuitive response (which would be put a bit defensively) is something like: disassembling machinery is like eating *all* the seed corn, and leaving it idle is like skimming a bit of corn off the top. Things keep working; there's still productivity and returns, but less than otherwise.

(note: I think this is valid, and it's why I don't think my analogy was all bad)

I think that intuitive response is wrong though. It's subtly moving the goal posts (similar to e.g. a "strategic" clarification), and would be expressing an idea like: "we're both right, we should blame miscommunication". That'd be dishonest though, because:

a) I didn't see some limits of the analogy that I do now - this contradicts the idea of miscommunication being a primary issue (it's not important if curi and I understood each other fully in every way; we understood each other sufficiently), and

b) the reasonable next steps from a miscommunication would be to figure out how to avoid it. Some miscommunications are due to like ~inferential distance but that doesn't make sense here. The easiest solution (if it really was miscommunication) would have been for me to be clearer originally. If I advocated that (and claimed I could have done it) I'd be pretending like there wasn't ever an issue; at the very least my lack of clarity would be an issue. Maybe I couldn't have been clearer for lack of knowledge, in which case it'd still be dishonest--and evasive--to claim a miscommunication b/c that wasn't the problem.

I don't know any way that my intuitive response would have been good, which is the reason I wrote this perimortem.

I'm not sure if putting the response in this perimortem is like a roundabout (and/or cowardly) way of trying to say the idea anyway. However, I think writing the perimortem is a better alternative than making the titular reply, so I'm satisfied for now.

Max at 3:23 PM on September 15, 2020 | #18039 | reply | quote

> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.

Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.

curi at 9:42 PM on September 15, 2020 | #18044 | reply | quote

Max's postmortem on #18030 #18043 #18050

IR wrote (addressing curi):

> i feel very much like i have gotten some of these ideas from you, but i dont know which things youve wrote that i got these ideas from. and i dont know how much ive changed them.

I asked IR:

> Otherwise, does it matter how much you've changed your mind?

which didn't make much sense. Context: #18030 #18043 #18050

I think 2 main things happened:

1. I wasn't careful when reading IR's comment, so missed important details / relations. (i.e. he was talking about changes to curi's ideas in his head, not changes to his own pre-existing ideas in his head)

2. I've been thinking recently about how my own ideas have changed over the last ~3 months.

(1) allowed me to ~*skip between trains of thought* without noticing. I ended up thinking about IR's comment in terms of (2). My question to IR makes more sense in this light.

Beyond the issue of miscommunication in general, there's a bigger problem I should care about and deal with. That is: responding earnestly to someone (usually) takes longer than reading what came immediately before. If I spend time responding to what I *thought* they wrote (but I'm wrong about that) then it's, in essence, wasted time. Maybe there are some benefits, but they're less than they would be otherwise.

To avoid this sort of thing the obvious answer is reading stuff better. That doesn't feel super actionable tho b/c just concentrating more on ~*everything* I read is not v efficient, esp if this sort of issue isn't super common.

I could try re-interpreting what the person says, like re-writing out what I thought they meant before replying, but how would I know if that were right/wrong? It might make it clearer to me if I was *unclear* about what they thought. It doesn't help if I think I know what they meant and that idea is clear and consistent in my mind (as it was in this case).

This issue was - I think - that the reference "these ideas" is somewhat ambiguous (or maybe just tricky). I think IR's full sentence (expanding "them") is something like:

> and i dont know how much ive changed [my version of ideas I got from your ideas relative to the original ideas you wrote]

So, this might be a better sketch of what to do:

- recognise tricky references (ideally automatically)

- when tricky references occur, expand them out (there could be more than one possibility)

- criticise the possibilities so I get just one

- if I can't and it's ambiguous still, ask a clarifying question (listing the possibilities too)

- optionally respond to each possibility if short enough or easy enough

- if I get one and it's reasonable I can just respond

- if I get one and I'm not sure it's reasonable, ask a clarifying question and respond at the same time

the next step in this action-plan-sketch is "recognise tricky references (ideally automatically)". **The first part of that is introducing a breakpoint (in the coding sense) on tricky references.** I can do this a bit by paying more attention to references in general, trying to quickly figure out what they mean (and eval-ing if I know what they mean), and taking action if I don't. If I'm not 10/10 confident on the reference I should stop and investigate.

Okay, this feels like a decent PM and plan. Feedback welcome/appreciated. It was a bit trickier than normal to figure out what to do because a plan like 'learn2read' didn't feel good enough.

Max at 7:49 PM on September 16, 2020 | #18051 | reply | quote

>> I intuitively think that 'exploratory' style learning (with a high(er) error rate) has benefits.

> Whether something is an error depends on your goal. If your goal is to get it correct, exploring works badly. If your goal is e.g. to get a rough overview, exploring works well.

I hadn't considered this. It makes sense. That said, I don't think it's what I had in mind.

The italicised bits of this example are a bit of an outline.

An example is the route-finding-app I made for my SSOL speedrun: *I spent way too long* trying to get the PNG of the map as a background image behind the lines and points that get drawn. *Eventually I managed it* (after lots of different attempts and integrating bits of code I found online). *The main difficulty* was that the original author of the (simple) travelling salesman program used Haskell's GLUT library which is basically a *lowish-level* OpenGL lib (and *I'm not familiar* with low level opengl stuff). There are higher-level ones that make this stuff easy. *I only really cared about the outcome but it took way longer than I wanted it to.*

I didn't read a manual or in-depth tutorial; instead I tried to fumble my way through. That is sometimes faster. But you can't answer basic questions like 'how much longer till I finish?'

In some ways my process involved exploring as you describe. I toyed with the idea of switching to a higher level library, looked for higher level stuff that exposed/integrated with the lower level stuff (no luck), and read bits from the middles of some advanced/in-depth tutorials.

But, crucially, the exploring was a side-effect of a particular problem with the other bits. I'd say my choice of method when trying to get the PNG to draw on-screen was exploratory learning, so it's different to exploration as you describe (though somewhat related).

Eventually I found some code someone had written that was close enough to what I needed to make it work. There was a weird interaction with other code I'd written tho (involving drawing text), that meant the first line of text was the right size but all the other lines didn't appear on screen. I managed to fix that but it took another like 30 min of experimentation.

A better method - in hindsight - would have been to just do a tutorial for Gloss (an alternative opengl-based library, but much higher level) and recode what I'd already written, and the opengl bits that came with the app originally. I could have gone through enough of a tutorial on Gloss given the amount of time I spent (like 5hrs+).

I did learn other stuff during that time, but I didn't feel like the time was particularly well spent. I don't expect to use OpenGL + Haskell much in future, so it's not like this is particularly useful outside this one thing I wanted to do.

In some ways I do this stuff for the challenge, like thinking "I should be able to do this, so I will", but I don't think "I should be able to do this, eventually, but should I bother, or should I look for a different way to do the same outcome?"

Max at 8:10 PM on September 16, 2020 | #18052 | reply | quote

TCS and passions

I was thinking about a TCS issue yesterday. I have half a soln. It's about a child's passions.

There's a possibly coercive idea I have that I think is the *common-er* version of the problem (maybe), then there's a more general version.

the possibly coercive version is like:

> I want my child to have a passion for maths (coercive), or

> I value passion about maths in general, and I want my child to be able to develop that if they want -- I don't want to *hinder* them (coercive?)

The second formulation feels like it could be done okay--without coercion--but I don't know enough to tell for sure.

I was thinking about this in the context of **a parent who's bad at maths**.

This made me think of a possible common issue *most* ppl would run into if they tried TCS: *their skills/passions are inadequate (not broad enough and general enough) to avoid hindering the child.*

I think not being perfect is okay, but if we can avoid significant hindrance that's good.

One situation is if the child develops a passion for X but the parent isn't good/passionate about it, they can still buy equipment/supplies or hire tutoring or find a friend who's passionate, etc. This is the 1/2 solution I mentioned.

But more broadly, how do you facilitate the *development* of a passion before it's manifested?

One thing I was thinking about is when ppl have been passionate about something and sparked something in me. A good example is Haskell and type-safe programming; a guy at a technical meetup sold me on Haskell over a beer. It took me *years* before I actually used it in production, but I was sold in 20 min.

So exposing a child to a wide range of *passionate* people--who are probs the higher-value ppl to expose children to, anyway--is maybe one way, though that could be done corrosively. If you happen to be friends with passionate ppl and they visit and talk to your child, that feels different than like *engineering situations* to trick your child or something.

I haven't looked through the archive to see what other ppl have said on the topic, yet.

Max at 12:57 AM on September 19, 2020 | #18066 | reply | quote

correction s/corrosively/coercively


Max at 12:59 AM on September 19, 2020 | #18067 | reply | quote

Quick thought on a secondary goal of life.

I think a good secondary goal for one's life--or maybe another primary goal as yesno supports--is to live without control. By that I mean: live so that you are consistently happy with your decisions, and also make those decisions without willpower or self-control. All choices would still involve something like a moral code, except that you have no animosity against those choices; they are choices you'd always make anyway. It's sort of like having no friction.

Ofc there will always be conflicts and problems to solve, but this state is like the closest you can get to that *and sustain*.

Max at 9:00 PM on September 19, 2020 | #18080 | reply | quote

(This is an unmoderated discussion forum. Discussion info.)