Example of Rejecting TOC Improvements

(I also posted this on Less Wrong.)

Below I quote from the Process of Ongoing Improvement forum, letter 6. Eli Goldratt shares a letter he received. I added a few notes to help people follow acronyms.

My question is: Does anyone know of any applications of Less Wrong philosophy to a situation like this? How can LW ideas about rationality explain or fix this sort of problem? The scenario is that someone tried to use rational thinking to make business improvements, was highly successful (which was measured and isn't in dispute), but nevertheless has met so much ongoing resistance that he's at the point of giving up.

I am no expert in TOC but I believe my recent experiences have impact as to what you are writing about.

TOC = Theory Of Constraints. Summary.

About 2 years ago I started on my TOC adventure. Read everything I could get a hold of, etc. Tried to get the company interested, etc. In fact, I finally got them interested enough that we had multiple locations participate in the satellite sessions and had enough for three facilitators (myself included). However, I could never get the company to spend for training at AGI. So, in the old air cav fashion, I felt it was up to me to make it happen.

Last year we had real problems with cost, service, high inventory, etc. My plant, I am the plant manager, was being analyzed for a possible shutdown or sell off. We were asking for 17 machines at about $300,000 each due to "lack of capacity" and we were being supplemented by outside producers.

Again, I am not a TOC expert. Basically my exposure has been reading and researching and building computer simulations to understand. But I put on TOC classes for all of my associates (200). I spent 8 hours with each of these associates in multiple classes. We talked about the goal, TIOE, we played the dice game (push, KanBan, DBR) with poker chips, paper clips, and different variations of multiple sided dice and talked about its impact, etc.

The Goal (summary) is a book by Eli Goldratt that has sold over 6 million copies.

TIOE = Throughput, Inventory and Operating Expense. These are the measurements Goldratt recommends.

The dice game is explained in The Goal. It's also now taught by e.g. MIT (section 3-2).

DBR = drum buffer rope. It's about coordinating activities around the bottleneck/constraint.

Last summer we started development on DBR and a new distribution strategy based on what I have read and researched on TOC. I used Bill Dettmer’s book to develop trees and the clouds. I checked our plan against some presentations last November in Memphis when we attended the TOC symposium there.

We had many in the company who doubted but we stuck our necks out and started at the beginning of this year. And we knew we would not be perfect.

YTD results:

YTD = Year to date

Achieved Company President’s Award for Safety (first plant to do so), and the planning was based on things I had read about TOC and techniques on establishing teamwork.
Service is up from the high-80-to-low-90 percent range to averaging above 98.5%
Costs are under budget for the first time in some years
Total inventory has decreased over 30% and is still dropping
No longer being supplemented by outside companies for our production
No longer need additional machines to supply demand
We do need additional business to fill our machines
Plant is no longer being considered for closure; in fact, production from other facilities is being transferred in.

The chief concern when we told the bigwigs we were going to this was that the cost of freight would go up because our transfer batch sizes would get small. I told them that was correct, but that we would stop shipping product back and forth between distribution centers, and repacking of product would be almost non-existent. YTD: Our total freight dollars spent is 10% less than the previous year, but they look at $/lb of freight, which has gone up. I know this is wrong, they state they know it is wrong, but it still gets measured and used for evaluations.

Anyway, as we shipped more often but in smaller quantities, our distribution centers complained that we were costing them too much. I have tried for 9 months to get them to quantify this to me. "If I increase the transfer batch size, how many people will it reduce, or how much overtime will it reduce, or what other real incremental cost will it get rid of?" The general response is, it is hard to quantify but we know it is there. Maybe their intuition is correct, but maybe it is not.

So finally, I am at my end. The DCs continue to insist that we are driving their costs up with small transfer batch sizes. They have complained greatly to my boss and my boss's boss. I am growing weary of the continual fight, which has cost me and my family so much time and effort. I have chosen to give up. I have grown tired of the comments, "Well it was said in a meeting that the concept did not deliver what we expected." Then I show them the numbers and ask, "What else was expected?" The reply, "That is what I heard at the meeting."

DCs = distribution centers.

Maybe I made a mistake trying to bring TOC to my plant myself. I would have loved to hire a consultant who really knew what they were doing, but any mention of that brought long talks about cost, etc. I hate to give up but my frustration level has impacted my family, which is something I cannot let happen.

In the end, I have decided this week to give them their large transfer batch sizes while I begin to look for somewhere else to go.

I did not mean for this to be a bitch session. But I cannot believe the sheer level of frustration in trying to achieve buy-in, even when:
1) Prior to going to our concept we had meetings with our leadership where I presented the UDES from the previous year, and all agreed,

UDES = UnDesirable EffectS. He's saying that before starting he discussed what problems the company was facing with leadership and got unanimous agreement.

2) Showed our potential solution, not all agreed but they were willing to try it
3) Now showing the best numbers the plant has ever turned out.
I just cannot understand the skepticism.

What insight can LW bring to this problem of negative response to rational improvement?


Elliot Temple | Permalink | Messages (0)

Principles Behind Bottlenecks

(I also posted this at Less Wrong.)

This post follows my Chains, Bottlenecks and Optimization (which has the followups Bottleneck Examples and Comment Replies for Chains, Bottlenecks and Optimization). This post expands on how to think about bottlenecks.


There are deeper concepts behind bottlenecks (aka constraints, limiting factors, or key factors).

First, different factors contribute different amounts to goal success. Second, there’s major variation in the amounts contributed.

E.g. I’m adding new features to my software. My goal is profit. Some new features will contribute way more to my profit than others. There are lots of features my (potential) customers don’t care about. There are a few features that tons of customers would pay a bunch for.

A bottleneck is basically just a new feature that matters several orders of magnitude more than most others. So most features are approximately irrelevant if the bottleneck isn’t improved.

Put another way: improving the bottleneck translates fairly directly to more goal success, while improving non-bottlenecks translates poorly, e.g. only at 1/1000th effectiveness, or sometimes 0. (It’s possible, but I think uncommon, to have many factors that contribute similarly effectively to goal success. Designing stuff that way doesn’t work well. It’s the same issue as balanced production lines being bad, which Eli Goldratt explains in The Goal: A Process of Ongoing Improvement. It’s also similar to the Pareto Principle which says 80% of effects come from 20% of the causes – meaning most factors aren’t very important.)

What about the software not crashing, not corrupting saved data, and not phoning home with location tracking data? People want those things but I could have them and easily still make zero profit.

A good, typical model for viewing goal pursuit is:

  1. There are many factors that would help, and just one or a few of them are the most important to focus on. This is because most factors have a significantly smaller impact. This is focusing on the key positives.
  2. There are also many dealbreaker factors that cause failure if screwed up. This is avoiding major negatives.

People care about (1) conditional on (2) not being broken. Avoid anything awful, then optimize in the right places.

When buying a cat, I might try to optimize cuteness and cheapness, while also making sure the cat has 4 paws, a tail, no rabies, is tame, and isn’t too old. I want to do well on a couple key factors and also a bunch of easy factors need to be non-broken. It’s generally not that hard to brainstorm dozens of dealbreakers, many of which are quite easy to avoid in your current situation, even to the point of sounding silly to mention at all.

(Dealbreakers are also contextual. If there were no cats available meeting all my criteria, I might lower my standards.)

The type (2) factors don’t require much attention. If a factor did need attention, it’d switch categories. (2) is just for failure conditions which are pretty easy to handle. This means most of our attention is available to focus on a few key issues.

I think this model is more effective than e.g. something like “consider all the factors; find out it’s way too complicated; try to approximate what you’d do if you had enough attention for all the factors”.

The model I’m proposing can be thought of as a method of organized, effective approximation from a more complex “take everything fully into account” approach. It tells us how to approximate. Thought of another way, I’m saying don’t distribute your significant figures equally.

You might think “Why not just weight all the factors relevant to my goal, then distribute my attention and significant figures according to the weightings?” The difficulty with that is how to weight things. Having a cat that doesn’t attack me and give me rabies is really important. If I’m just weighting factors normally, I’ll give that high weight because I want to reject any cat purchase which fails at that issue.

So if you just start assigning weights straightforwardly, you’ll give the type (2) factors high weights, e.g. 50,000 each, and if they all pass then the type (1) factors will function as tiebreakers worth e.g. 1-50 points each (minor detail: you can scale the weights so they add up to 1, but it’s easier to do that after you have all the factors with weights assigned – I don’t know what fraction of 1 a big factor should be until I know how many big factors there are). But the high value type (1) factors are actually the best place to put a bunch of significant figures. We don’t need a bunch of precision to address our cat having a tail, 4 legs, and no rabies. So attention allocation shouldn’t correspond to weighting.
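
Here’s a minimal sketch in Python of the two approaches, contrasting dealbreaker factors treated as pass/fail gates plus a couple of key factors to optimize, versus a single weighted sum that needs huge weights just to emulate the gates. All the cats, scores and weights below are made up for illustration.

# Hypothetical sketch; every name and number is invented.
cats = [
    {"name": "Whiskers", "rabies": False, "tame": True, "paws": 4,
     "cuteness": 7, "price": 50},
    {"name": "Claws", "rabies": True, "tame": True, "paws": 4,
     "cuteness": 9, "price": 20},
]

def passes_dealbreakers(cat):
    # Type (2) factors: binary pass/fail, cheap to check, no optimization needed.
    return (not cat["rabies"]) and cat["tame"] and cat["paws"] == 4

def key_factor_score(cat):
    # Type (1) factors: the few places where more is better.
    return cat["cuteness"] - 0.05 * cat["price"]

# Gate first, then optimize only among the survivors.
candidates = [c for c in cats if passes_dealbreakers(c)]
best = max(candidates, key=key_factor_score)
print(best["name"])  # Whiskers: Claws is cuter but fails a dealbreaker

def weighted_sum(cat):
    # The naive alternative: huge weights on dealbreakers just to emulate the
    # gate, spending precision where none is needed.
    return (50_000 * (not cat["rabies"]) + 50_000 * cat["tame"]
            + 50_000 * (cat["paws"] == 4)
            + cat["cuteness"] - 0.05 * cat["price"])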

In general, when we pursue a goal, there are many important but easy factors, and a few important but hard factors. For goals which are achievable but not easy, it has to be this way. If there were dozens of hard factors, that basically means we’re not ready to do it (though with a huge budget and a big team, sometimes it can be done – that lets you have specialists each working on just one hard factor each, plus some additional people figuring out how to coordinate and combine the results). But the standard progression is: if a project has 10 hard factors, that’s too many for me to focus my attention on at once, so I need to work on some easier sub-projects first – e.g. learning about some of those issues in isolation or doing smaller projects that help build up to the bigger one.

Another way to view the difference is that an increase in the key factors increases our success at the goal. E.g. adding the right new feature will increase profit. Or getting a cuter cat will increase enjoyment. Loosely, the more the better (there’s sometimes an upper limit, at which point it stops helping or is even actively harmful, or it keeps helping but now some other factors matter more). But for type (2) factors, the attitude isn’t anything like “more is better”; it’s just “don’t screw it up”.

In this analysis, I’ve basically assumed that type (1) factors and goal success come in matters of degree (can have more or less of them), but type (2) factors have a binary, pass/fail evaluation. The analysis needs extending for how to deal with binary goals, binary type (1) factors, and matters of degree for type (2) factors. Those issues will come up and we need some way to think about them. I’ll leave that extension for a future post.

That extension is part of a broader issue of how binary and degree issues come up in life, how to think about them, how to convert from one to the other (and when that’s possible or not), when one type is preferable to the other, and so on. They’re both important tools to know how to think about and work with.

Factory Example

Now let’s go through an extended example to clarify how some of these issues work.

In my factory, I’m combining foos and bars to make foobars, which I sell. I have more bars than foos. So foos are the bottleneck. Getting even more bars won’t result in producing additional foobars. I already have an excess capacity of bars.

I also have excess capacity for assembly and QA. My current work area and team could produce many more foobars without hiring new people, getting more space, or getting new tools. And they could already check more foobars for defects.

And I have excess capacity in the market: I could sell more foobars if only I could produce them.

I also have excess capacity on foobar quality. I could redesign them to be nicer, but they’re good enough. Customers are satisfied. They do the job.

And I have excess capacity on price. Cheaper would be nicer, sure, but there isn’t much competition and my customers are people with a good reason to get a foobar. They get benefits from the foobar which are well above the price I’m charging.

Excess capacity means non-bottleneck.

Supply of foos is the bottleneck and the other issues are non-bottlenecks.
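
As a toy sketch of this (all capacities below are hypothetical), weekly output is capped by the scarcest factor, so adding capacity anywhere else changes nothing:

# Hypothetical capacities, in foobars per week.
capacities = {
    "foos": 100,           # the bottleneck
    "bars": 250,
    "assembly": 400,
    "qa": 500,
    "market_demand": 300,
}

def weekly_foobars(caps):
    # Each foobar needs one foo, one bar, an assembly slot, a QA check,
    # and a buyer; whichever runs out first limits output.
    return min(caps.values())

print(weekly_foobars(capacities))  # 100

capacities["bars"] = 500           # more bars: wasted effort
print(weekly_foobars(capacities))  # still 100

capacities["foos"] = 150           # more foos: output rises directly
print(weekly_foobars(capacities))  # 150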

Using bars is limited by the availability of foos. That’s a traditional, standard bottleneck.

I call niceness a non-bottleneck because, as with foos, there is excess capacity. It won’t make much difference to achieving more of my goal (profit via foobar sales).

“Key factor” and “secondary factor” may be better terminology. It has some advantages, mostly because 1) foo supply isn’t blocking niceness from mattering in the way it’s blocking more supply of bars from mattering, and 2) niceness would help a little (a few orders of magnitude less than getting more foos, but not zero), which contrasts with bars – getting more bars wouldn’t help at all (in current circumstances).

Bottlenecks can be changed. E.g. I find a new supplier who can deliver far more foos than I need. Foos are no longer a bottleneck. Now what’s the bottleneck? What limits my profit? Maybe I’ll start running out of bars now. Maybe I won’t have enough customers and I’ll need better marketing. Maybe I’ll need to hire more workers. Maybe price will become the crucial issue: if I could lower the price, it’d get me a million new customers. Maybe price is key to breaking into the hobbyist market whereas price isn’t so important for the business market I currently serve.

To break into the hobbyist market, I might need to expand production capacity and lower the price and do a new marketing campaign. There could be several key factors. Doing three things at once is realistic (though not ideal), but we can’t split our focus too much. It’d be nice to find a way to improve things more incrementally. Maybe I’ll figure out how to produce foobars more cheaply first while leaving my price the same, and I’ll get some immediate benefit from higher profit margins. Then once I have the price low enough I’ll try to start selling some to hobbyists as a test (sell in some small stores instead of the big chains, or I could try online sales), and only if that works will I try to ramp up production and hobbyist marketing together.

I can also view the new project (selling to hobbyists, via expanding production, producing more cheaply, and a new marketing campaign) as a whole and then look at what the bottleneck(s) and excess capacity are. They might be quite unequal between the different parts of the new project.

(This is just a toy example. I didn’t worry about new distribution for hobbyists nor about designing a different version of the product for them which better meets the needs of a different market, nor did I worry about market segmentation and how to maintain my higher prices for business customers (a separate production version is one way to do that, using different regions is another, e.g. I could do my hobbyist sales in a different country than my existing business sales.))

Category (1) above (key positives) is the bottlenecks, the things that are valuable to pay attention to and optimize. Category (2) above (avoiding major negatives) is the non-bottlenecks, the things with excess capacity, which I can view as either “good enough” or “failure”. Relevant non-bottlenecks are important. I can’t just ignore them. They need to work. They’re in a position to potentially cause failure. But I’m not very worried about getting them to work and I don’t need to optimize them.


Elliot Temple | Permalink | Message (1)

Less Wrong Comment Replies for Chains, Bottlenecks and Optimization

Read this post, with replies, on Less Wrong.


Replies to comments on my Chains, Bottlenecks and Optimization:

abramdemski and Hypothesis Generation

Following the venerated method of multiple working hypotheses, then, we are well-advised to come up with as many hypotheses as we can to explain the data.

I think “come up with as many hypotheses as we can” is intended within the context of some background knowledge (some of which you and I don’t share). There are infinitely many hypotheses that we could come up with. We’d die of old age while brainstorming about just one issue that way. We must consider which hypotheses to consider. I think you have background knowledge filtering out most hypotheses.

Rather than consider as many ideas as we can, we have to focus our limited attention. I propose that this is a major epistemological problem meriting attention and discussion, and that thinking about bottlenecks and excess capacity can help with focusing.

If you’ve already thought through this issue, would you please link to or state your preferred focusing criteria or methodology?

I did check your link (in the quote above) to see if it answered my question. Instead I read:

Now we've got it: we see the need to enumerate every hypothesis we can in order to test even one hypothesis properly. […]

It's like... optimizing is always about evaluating more and more alternatives so that you can find better and better things.

Maybe we have a major disagreement here?

abramdemski and Disjunction

The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true. But some things are disjunctive: some one thing needs to be true. […]

A conjunction of a number of statements is -- at most -- as strong as its weakest element, as you suggest. However, a disjunction of a number of statements is -- at worst -- as strong as its strongest element.

Yes, introducing optional parts to a system (they can fail, but it still succeeds overall) adds complexity to the analysis. I think we can, should and generally do limit their use.

(BTW, disjunction is conjunction with some inversions thrown in, not something fundamentally different.)

Consider a case where we need to combine 3 components to reach our goal and they all have to work. That’s:

A & B & C -> G

And, treating A, B and C as probabilities, we can calculate the probability that it works with multiplication: ABC.

What if there are two other ways to accomplish the same sub-goal that C accomplishes? Then we have:

A & B & (C | D | E) -> G

Using a binary pass/fail model, what’s the result for G? It passes if A, B and at least one of {C, D, E} pass.

What about using a probability model? Problematically assuming independent probabilities, then G is:

AB(1 - (1-C)(1-D)(1-E))

Or more conveniently:

AB!(!C!D!E)

Or a different way to conceptualize it:

AB(C + D(1 - C) + E(1 - C - D(1 - C)))

Or simplified in a different way:

ABC + ABD + ABE - ABCD - ABCE - ABDE + ABCDE

None of this analysis stops e.g. B from being the bottleneck. It does give some indication of greater complexity that comes from using disjunctions.
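
Here’s a quick sketch of both models in Python, with made-up probabilities, showing that strengthening the disjunction barely matters while B stays the bottleneck:

# Binary pass/fail model for A & B & (C | D | E) -> G.
def goal_passes(a, b, c, d, e):
    return a and b and (c or d or e)

# Probability model, problematically assuming independence.
def goal_probability(a, b, c, d, e):
    return a * b * (1 - (1 - c) * (1 - d) * (1 - e))

print(goal_passes(True, True, False, True, False))              # True
print(goal_probability(a=0.99, b=0.5, c=0.9, d=0.9, e=0.9))     # ~0.4945
print(goal_probability(a=0.99, b=0.5, c=0.99, d=0.99, e=0.99))  # ~0.4950
print(goal_probability(a=0.99, b=0.9, c=0.9, d=0.9, e=0.9))     # ~0.8901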

There are infinitely many hypotheses available to generate about how to accomplish the same sub-goal that C accomplishes. Should we “or” together all of them and infinitely increase complexity, or should we focus our attention on a few key areas? This gets into the same issue as the previous section about which hypotheses merit attention.

Donald Hobson and Disjunction

Disjunctive arguments are stronger than the strongest link.

On the other hand, [conjunctive] arguments are weaker than the weakest link.

I don’t think this is problematic for my claims regarding looking at bottlenecks and excess capacity to help us focus our attention where it’ll do the most good.

You can imagine a chain with backup links that can only replace a particular link. So e.g. link1 has 3 backups: if it fails, it’ll be instantly replaced with one of its backups, until they run out. Link2 doesn’t have any backups. Link3 has 8 backups. Backups are disjunctions.

Then we can consider the weakest link-and-backups group and focus our attention there. And we’ll often find it isn’t close: we’re very unevenly concerned about the different groups failing. This unevenness is important for designing systems in the first place (don’t try to design a balanced chain; those are bad) and for focusing our attention.
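
A minimal sketch of that grouping (the failure probabilities are made up and assumed independent): a group fails only if the link and all its backups fail, and nearly all the risk concentrates in the group with the fewest backups.

groups = {
    "link1": [0.02] * 4,  # link plus 3 backups (failure probabilities)
    "link2": [0.02],      # no backups
    "link3": [0.02] * 9,  # link plus 8 backups
}

def group_failure_probability(failure_probs):
    # The group fails only if every member fails.
    p = 1.0
    for f in failure_probs:
        p *= f
    return p

failures = {name: group_failure_probability(fp) for name, fp in groups.items()}
weakest = max(failures, key=failures.get)
print(weakest, failures[weakest])  # link2, 0.02 – the place to focus attention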

Structures can also be considerably more complicated than this expanded chain model, but I don’t see that that should change my conclusions.

Dagon and Feasibility

I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

The limit is on feasibility of mapping to most real-world situations, and complexity of calculation to determine how big a bottleneck in what conditions something is.

Optimizing software by finding bottlenecks is a counterexample to this feasibility claim. We do that successfully, routinely.

Since you’re a Goldratt fan too, I’ll quote a little of what he said about whether the world is too complex to deal with using his methods. From The Choice:

"Inherent Simplicity. In a nutshell, it is at the foundation of all modern science as put by Newton: 'Natura valde simplex est et sibi consona.' And, in understandable language, it means, 'nature is exceedingly simple and harmonious with itself.'"

"What Newton tells us is that […] the system converges; common causes appear as we dive down. If we dive deep enough we'll find that there are very few elements at the base—the root causes—which through cause-and-effect connections are governing the whole system. The result of systematically applying the question "why" is not enormous complexity, but rather wonderful simplicity. Newton had the intuition and the conviction to make the leap of faith that convergence happens, not just for the section of nature he examined in depth, but for any section of nature. Reality is built in wonderful simplicity."


Elliot Temple | Permalink | Messages (3)

Bottleneck Examples

View discussion of this post at Less Wrong.


This post follows my Chains, Bottlenecks and Optimization. The goal is to give hypothetical examples of bottlenecks and non-bottlenecks (things with excess capacity), and to answer johnswentworth, who helpfully commented:

I really like what this post is trying to do. The idea is a valuable one. But this explanation could use some work - not just because inferential distances are large, but because the presentation itself is too abstract to clearly communicate the intended point. In particular, I'd strongly recommend walking through at least 2-3 concrete examples of bottlenecks in ideas.

I’ll give a variety of examples starting with simpler ones. If you want a different type, let me know.

Note: The term “bottleneck” has synonyms like “constraint” or “limiting factor”. I’ll often use “key factor”. This contrasts with a non-bottleneck, or secondary factor, which is something with excess capacity (above a margin of error), so improving it isn’t very useful. Doing better at a bottleneck makes a significant difference to doing better at your goal; doing better at a non-bottleneck doesn’t. My basic point is that we should focus our attention on key factors.

Oven

In The Goal by Eli Goldratt, the main example is a factory. One of the bottlenecks is the heat treat oven: the rate of baking parts in the oven was limiting the overall output of the factory.

A non-bottleneck example is quality assurance. It was possible to check parts for defects significantly faster than they came out of the oven. So hiring more QA people wouldn’t result in more finished products.

One of the main points of Goldratt’s book is that trying to have a balanced production line (no excess capacity at any workstation) is a bad idea.

Software

Focusing on key factors or bottlenecks is well known in software: to speed up a program, measure where most of the run time is being spent, then speed up that part (or those parts). Don’t just optimize any function. Most functions have excess capacity (they are more than fast enough already to get a satisfactory result, and their impact is orders of magnitude less than the bottleneck’s impact).
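
For example, here’s a minimal profiling sketch in Python (the function names are made up): measure first, then optimize whatever actually dominates the run time.

import cProfile
import pstats

def slow_step(n):
    return sum(i * i for i in range(n))  # dominates the run time

def fast_step(n):
    return n * (n - 1) // 2              # already more than fast enough

def run():
    for _ in range(200):
        slow_step(100_000)
        fast_step(100_000)

cProfile.run("run()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(5)  # the bottleneck tops the list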

Chair

I weigh 150 lbs and buy an office chair that can hold someone up to 300 lbs. It has excess capacity. Would a chair that can hold 400 lbs be 33% better (regarding this factor)? Nope, that wouldn’t be useful to me. Everything else being equal, a sturdier chair is better, but I should focus my attention elsewhere: other factors are going to matter orders of magnitude more than having more excess capacity on sturdiness.

I have a budget. Price is a key factor. If I buy a cheaper chair, I can buy more Fortnite skins. So when I’m chair shopping, I focus on variations in price, but I don’t pay attention to variations in weight capacity. (Every chair in the store holds my weight plus a margin of error, so it’s a non-issue.)

Another non-bottleneck is smoothness. I want a chair that doesn’t poke me. Every chair in the store is far more than smooth enough. If I measured the bumps, I’d find one chair has 50 micrometer bumps, another has 100 micrometer bumps, and so on, but it’d take 4000 micrometer bumps to poke me uncomfortably. I shouldn’t assign a higher score to the chair with smaller bumps when both have plenty small enough bumps. And there’s so much excess capacity here that I don’t need to and shouldn’t even do those measurements – that’d be wasteful.

Ideas

These examples involve ideas. E.g. “I’ll buy the Aeron chair” is an idea about how to proceed in a life situation. It has excess capacity on chair smoothness and sturdiness. It unfortunately fails horribly on the price bottleneck.

Factories are designed according to ideas. Someone’s design plan (or someone’s ideas about how to modify the factory) created that bottleneck at the oven.

Computer code corresponds to ideas that programmers have about what steps should be used to accomplish tasks. A programmer’s idea about how to design a program can have a speed bottleneck for one sub-idea and excess speed capacity for many other sub-ideas.

“I should go to Stanford” is an idea with excess capacity on distance because it’s more than far enough away from my parents. It also does great on the prestige key factor.

Another type of idea is a skill. E.g. I have ideas about how to play chess. They have excess capacity for the goal of beating a 1000 rated player – they are more than good enough to do the job. For the goal of getting a higher rating, my endgame knowledge is a bottleneck, but my opening knowledge has excess capacity. The positions I get out of the opening are more than good enough to move up in the chess world, but I lose too many drawn endgames.

When constructing a birdhouse, I have excess capacity for reading and understanding a guide, but a bottleneck for patience to go slowly and carefully enough given my poor skill at making wood come out the right shape. The wood has excess capacity for strength, but not for weight because I want to hang the birdhouse from a thin branch.

Evolution

We’re debating the selfish gene, group selection, or Lamarckism as the primary driver of biological evolution. The key factors involve causal explanations.

Lamarckism lacks specifics about the mechanism for transmitting change to the next generation (it’s also experimentally questionable). Sure you can hypothetically imagine a system which saves information about bodily system usage during a lifetime and then puts information into eggs or sperm. But that system hasn’t been found in reality, studied, observed under a microscope, etc. Genes have been, e.g. we’ve studied the shape, chemical composition and copying mechanisms of DNA.

A key issue with group selection was what happens with traits which help the group but harm the individual. What are the causal mechanisms by which those traits would end up in the next generation at higher rather than lower rates (lower due to the harm to the holders of the trait)? No good answer is known for the general case.

These theories all have excess capacity at being able to tell a high level story to account for the animals we observe. Their ability to do that could survive infinitely many variations of the animals to be explained (e.g. if giraffes were 1.1 inches taller on average, or 1.11, or 1.111…). They could also still tell their stories successfully given an infinity of additional constraints, e.g. that the story doesn’t use the number 888111, or the constraint it doesn’t use 888112, or a constraint on 888113, etc.

It’d be an error to pick some evidence, e.g. observations of spiders, and then try to estimate how well each theory fits the evidence, and assign them differing scores. Each theory, if it was assumed to be right about the key issues, would be able to explain spiders fine. (Our view of how well an idea deals with a non-bottleneck factor is often a proxy for our judgment of a key factor – I don’t like Lamarckism’s explanation of the origin of spiders because I don’t think acquired traits are inherited in genes.)

College Rankings

College rankings are discussed in The Order of Things, an article by Malcolm Gladwell about why it’s hard to usefully combine many factors into a single overall ranking score.

Many dimensions, like class size, graduation rate or prestige, come in different units with no conversions (and some dimensions are hard to measure at all). It’s not like converting inches to meters, it’s like trying to convert inches to minutes (or converting both inches and minutes to something else, e.g. grams).

The key factors for colleges vary by person/context. I want a college which is at least 1000 miles away from my parents, but you strongly prefer a local college so you can save money by not moving out. And neither of those factors can be taken into account by one-size-fits-all college rankings published nationally, even if they wanted to include them, because college seekers live in different places.

Joe has excess capacity on graduation rate. He doesn’t mind going to a school where 80% of people graduate over a school where 90% of people graduate. He’s a great student and is confident that he can graduate regardless. His parents have PhDs and he’s had exposure to professors, to what type of skills are needed to graduate, etc., so he’s in a good position to make this judgment.

Steve will be the first person in his family to go to college. He struggled in high school, both with the academics and with communicating in English with his teachers. For Steve, a college with a 99% graduation rate looks way less risky – that’s a key factor.

Key factors are situational. Kate wants a prestige degree, but Sue wants any degree at all just to satisfy her parents. Sue also wants somewhere she can live on campus with her dog.

Kate and Sue have excess capacity in different areas. Kate is so good at basketball that she can get a full scholarship anywhere, so she doesn’t care about tuition price. Sue is way less bothered by dirt and bad smells than most people, so she has excess capacity on attending a dirty, smelly college.

Some factors are about the same for everyone. They all want a college with plenty of air available to breathe. Fortunately, every single college has excess capacity on air. Even if you came and took some air away, or the college had a bad air day (where, due to the motion of gas atoms and statistical fluctuations, there were an unusually low number of air molecules on campus that day), there’d still be plenty of air left. This example is a reminder of the importance of focusing on only a few factors out of infinite factors that could be evaluated.

Physics

In science, we want our empirical theories to match our observations but not match a ton of other, logically possible observations. A law like E=hf (the energy of a photon is Planck’s constant times the photon’s frequency) is valuable in large part because of how much it excludes. It’s pretty specific. We don’t want excess capacity for the set of physical events and states allowed by the law; we prefer a minimal and highly accurate set. So that’s a key factor where we want as much as we can get (more of it translates to more success at our goal).

E=hf has excess capacity on shortness. It could be a longer formula and we’d still accept it.

E=hf has excess capacity on experimental data. We could have less data and still accept it. The data is also much more precise than necessary to accept E=hf. And we have excess documented counter examples to E=hf^7, E=hf^8, E=hf^9, and to infinitely many other rival theories.

E=hf has excess capacity on ease of use. It could be more of a hassle to do the calculation and we’d still accept it.

E=hf has excess capacity on rhetorical value. It could be less persuasive in speeches and we’d still accept it. This would remain true even if its rhetorical value was ~zero. We don’t judge science that way (at least that’s the aspiration).

Peter tries to debate me. No, E=Gd, he claims. What’s Gd I ask? God’s decision. But that’s not even a multiplication between G and d! This reminds me that E=hf does great on the “actually math” criterion, which normally isn’t a key factor in my discussions or thinking, but it becomes a key factor when I’m talking with Peter. Related to this, I have a bunch of excess capacity that Peter doesn’t: I could be really tired and distracted but I’d still remember the importance of math in scientific laws.

As long as Peter disagrees re using math, many other issues that I’d normally talk about are irrelevant. I shouldn’t try to debate with Peter how significant figures and error bars for measurements work. That wouldn’t address his no-math perspective; it’d be the wrong focus in the situation. It’d be a mistake for me to say that my approach has a really great, nuanced approach to measurement precision, so Peter should increase his confidence that I’m right. If I said that, he should actually become more doubtful about me because I’d be showing inflexible thinking that’s bad at understanding what’s relevant to other contexts that I’m not used to.

Minimum Wage Debate

We’re debating minimum wage. We agree that low skill workers shouldn’t get screwed over. I say minimum wage laws screw over workers by reducing the supply of jobs. You say minimum wage laws prevent workers from being screwed over by outlawing exploitative jobs.

The key factor for my claim is economics (specifically the logic and math of supply and demand in simple hypothetical scenarios). When I convince you about that, you change your mind. I should focus on optimizing for that issue. During the debate, I have excess capacity on many dimensions, such as theism, astrology or racism. I’m not even close to causing you to think my position is based on God, the stars, or race. I don’t need to worry about that. When I’m considering what argument to use next, I don’t need to avoid arguments associated with Christianity; I can ignore that factor. Similarly, I don’t need to factor in the race of the economists I cite.

There are many factors which could be seen positively in some way. E.g. economics books with more pages and more footnotes are more impressive, in some sense. This is contextual: some people would be more impressed instead by a compact, very clear book.

But we actually have tons of excess capacity on page count and footnotes. You’re tolerant of a wide variety of books. I don’t need to worry about optimizing this factor. I can focus on other factors like choosing the book with the best clarity and relevance (key factors).

If you were picky about dozens of factors, our discussion would fail. Your tolerance lets me focus on optimizing only a few things, which makes productive discussion possible.

So I convince you that I’m right about minimum wage. But next year you come back with a new argument.

Don’t government regulations make it harder to start a business and to hire people? There’s lots of paperwork that discourages entrepreneurship. This artificially reduces the supply of jobs. It prevents the supply and demand of jobs from reaching the proper equilibrium (market clearing price). Therefore, workers are actually being underpaid because they’re competing for too few jobs, which drives wages down.

Now what? You’re right that my simplified market model didn’t fully correspond to reality. The bottleneck is no longer your ignorance of basic economics. You’ve actually read a bunch and now have excess capacity there: you know more than enough for me to bring up some economics concepts without confusing you. Also you’re very patient and highly motivated, so I don’t have to keep things really short. However, you’re sensitive to insults against less fortunate people, so I have to check for something potentially offensive when I do an editing pass. I only want to share arguments with you that have excess capacity for that – they are more than inoffensive enough.

What should I do? I could defend my model and tell you all kinds of merits it has. The model is useful in many ways. There are many different ways to argue for its correctness given its premises. But those aren’t bottlenecks. You aren’t denying that. That won’t change your mind because the problem you brought up focuses on a different issue.

I judge that the bottleneck is your understanding of what effect minimum wage has on a scenario where the supply of jobs is artificially suppressed. Yes that’s a real problem, but does minimum wage help fix it? I need to focus on that. When I’m considering candidate arguments to tell you, I should look at which one will best address that (and then check offensiveness in editing), while not worrying about factors with excess capacity like your patience and motivation. All the arguments I’m considering will work OK given the available patience and motivation (yes, I could make up a tangled argument that takes so much patience to get through that it turns patience into a bottleneck, but it doesn’t require conscious attention for me to avoid that). Improvements to those factors (like requiring one less unit of patience) are orders of magnitude less important than the key factors (like creating one more unit of understanding of the effects of minimum wage on a system with a government-constrained job supply).


Elliot Temple | Permalink | Messages (2)

Chains, Bottlenecks and Optimization

View this post, with discussion, on Less Wrong.


Consider an idea consisting of a group of strongly connected sub-ideas. If any sub-idea is an error (doesn’t work), then the whole idea is an error (doesn’t work). We can metaphorically model this as a metal chain made of links. How strong is a chain? How hard can you pull on it before it breaks? It’s as strong as its weakest link. If you measure the strength of every link in the chain, and try to combine them into an overall strength score for the chain, you will get a bad answer. The appropriate weight to give the non-weakest links, in your analysis of chain strength, is ~zero.

There are special cases. Maybe the links are all equally strong to high precision. But that’s unusual. Variance (statistical fluctuations) is usual. Perhaps there is a bell curve of link strengths. Having two links approximately tied for weakest is more realistic, though still uncommon.

(A group of linked ideas may not be a chain (linear) because of branching (tree structure). But that doesn’t matter to my point. Stress the non-linear system of chain links and something will break first.)

The weakest link of the chain is the bottleneck or constraint. The other links have excess capacity – more strength than they need to stay unbroken when the chain gets pulled on hard enough to break the weakest link.

Optimization of non-bottlenecks is ~wasted effort. In other words, if you pick random chain links, and then you reinforce them, it (probably) doesn’t make the chain stronger. Reinforcing non-weakest links is misallocating effort.
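
A minimal sketch of the chain metaphor (the link strengths are made up): the chain’s strength is the minimum, so reinforcing anything but the weakest link is wasted.

links = [820, 790, 640, 805, 980]  # hypothetical link strengths

def chain_strength(link_strengths):
    return min(link_strengths)

print(chain_strength(links))  # 640

links[4] += 300               # reinforce an already-strong link: no change
print(chain_strength(links))  # still 640

links[2] += 50                # reinforce the weakest link
print(chain_strength(links))  # 690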

So how good is an idea made of sub-ideas? It’s as strong as its weakest link (sub-idea). Most ideas have excess capacity. So it’d be a mistake to measure how good each sub-idea is, including more points for excess capacity, and then combine all the scores into an overall goodness score.

Excess capacity is a general feature and requirement of stable systems. Either most components have excess capacity or the system is unstable. Why? Because of variance. If lots of components were within the margin of error (max expected or common variance) of breaking, stuff would break all over the place on a regular basis. You’d have chaos. Stable systems mostly include parts which remain stable despite variance. That means that in most circumstances, when they aren’t currently dealing with high levels of negative variance, they have excess capacity.

This is why manufacturing plants should not be designed as a balanced series of workstations, all with equal production capacity. A balanced plant (code) lacks excess capacity on any workstations (chain links), which makes it unstable to variance.

Abstractly, bottlenecks and excess capacity are key issues whenever there are dependency links plus variance. (Source.)

Applied to Software

This is similar to how, when optimizing computer programs for speed, you should look for bottlenecks and focus on improving those. Find the really slow part and work on that. Don’t just speed up any random piece of code. Most of the code is plenty fast. Which means, if you want to assign an overall optimization score to the code, it’d be misleading to look at how well optimized every function is and then average them. What you should actually do is a lot more like scoring the bottleneck(s) and ignoring how optimized the other functions are.

Just as optimizing the non-bottlenecks with lots of excess capacity would be wasted effort, any optimization already present at a non-bottleneck shouldn’t be counted when evaluating how optimized the codebase is, because it doesn’t matter. (To a reasonable approximation. Yes, as the code changes, the bottlenecks could move. A function could suddenly be called a million times more often than before and need optimizing. If it was pre-optimized, that’d be a benefit. But most functions will never become bottlenecks, so pre-optimizing just in case has a low value.)

Suppose a piece of software consists of one function which calls many sub-functions which call sub-sub-functions. How many speed bottlenecks does it have? Approximately one, just like a chain has one weakest link. In this case we’re adding up time taken by different components. The vast majority of sub-functions will be too fast to matter much. One or a small number of sub-functions use most of the time. So it’s a small number of bottlenecks but not necessarily one. (Note: there are never zero bottlenecks: no matter how much you speed stuff up, there will be a slowest sub-function. However, once the overall speed is fast enough, you can stop optimizing.) Software systems don’t necessarily have to be this way, but they usually are, and more balanced systems don’t work well.
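
A toy timing breakdown (the numbers and function names are invented) makes the point: the total is a sum, but one sub-function dominates it, so it behaves like a single bottleneck.

times_ms = {"parse": 3, "validate": 2, "render": 940, "log": 1, "save": 4}

print(sum(times_ms.values()))  # 950 ms total

times_ms["save"] //= 2         # halve a non-bottleneck
print(sum(times_ms.values()))  # 948 ms – barely moved

times_ms["render"] //= 2       # halve the bottleneck
print(sum(times_ms.values()))  # 478 ms – nearly halved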

Applied to Ideas

I propose viewing ideas from the perspective of chains with weakest links or bottlenecks. Focus on a few key issues. Don’t try to optimize the rest. Don’t update your beliefs using evidence, increasing your confidence in some ideas, when the evidence deals with non-bottlenecks. In other words, don’t add more plausibility to an idea when you improve a sub-component that already had excess capacity. Don’t evaluate the quality of all the components of an idea and combine them into a weighted average which comes out higher when there’s more excess capacity for non-bottlenecks.

BTW, what is excess capacity for an idea? Ideas have purposes. They’re meant to accomplish some goal such as solving a problem. Excess capacity means the idea is more than adequate to accomplish its purpose. The idea is more powerful than necessary to do its job. This lets it deal with variance, and may help with using the idea for other jobs.

Besides the relevance to adding up the weight of the evidence or arguments, this perspective explains why thinking is tractable in general: we’re able to focus our attention on a few key issues instead of being overwhelmed by the ~infinite complexity of reality (because most sub-issues we deal with have excess capacity, so they require little attention or optimization).

Note: In some ways, I have different background knowledge and perspective than the typical poster here (and in some ways I’m similar). I expect large inferential distance. I don’t expect my intended meaning to be transparent to readers here. (More links about this: one, two.) I hope to get feedback about which ideas people here accept, reject or want more elaboration on.

Acknowledgments: The ideas about chains, bottlenecks, etc., were developed by Eliyahu Goldratt, who developed the Theory of Constraints. He was known especially for applying the methods of the hard sciences to the field of business management. Above, I’ve summarized some Goldratt ideas and begun relating them to Bayesian epistemology.


Elliot Temple | Permalink | Messages (0)

Exploring Gender as a Social Construct

This question is directed to people who think gender matters for behavior and mental capabilities. Similar questions could be asked about race and other traits.

Suppose that gender is a social construct. Suppose that gendered behavior is due to just culture, not a mix of culture and genes. Suppose that women are born with equal mental capabilities to men.

If you conceded all that, what would you change your mind about, if anything? Why?

I ask this because a lot of effort is spent denying that gender is a social construct. Many right wing people are quite hostile to the social construct theory and view it as dangerous. But what negative consequences do they think it implies?

I interpret people as thinking something like "If the left was correct that gender is a social construct, then a lot of their political philosophy would be correct, and I'd have to change my mind about a bunch of stuff." I am doubtful of this and don't see that the social construct theory implies much leftist political philosophy.

If gender is a social construct, that doesn't mean it doesn't exist. Social constructs exist and matter. They can't be instantly or trivially changed or gotten rid of. Culture and memes are important.

This issue is complicated by biological differences between the genders for e.g. muscles. Men are stronger on average. The difference is significant. Reasonable people don't deny that. Try to focus your answer on basically intellectual differences, personality differences, behavior differences, mental differences, etc., which are the things that might be cultural.

Note that the anti social construct view claims that genes influence gendered mental traits, but do not fully determine them. They think a mix of biology and culture leads to gendered traits. They don't claim it's all biology. The social construct view, by contrast, denies the role of biology. It rejects the mixed factors view in favor of a single dominant factor.

For people who think gender is a social construct, I have a similar question: What (classical) liberal ideas do you think that contradicts, if any?


Elliot Temple | Permalink | Messages (59)

How are people so stupid?

David Horowitz asked on Twitter:

How do people like Victoria get so stupid?

I wrote a generic reply which has nothing to do with the partisan political statements Victoria said, which I didn't even read before replying.

At age 2 her parents order her around. She doesn't understand most orders & is punished for clarifying questions. She has to try to follow orders she doesn't understand. She ends up lowering her standards for what understanding is and goes through school not understanding much.

When she does understand something about reality correctly, it sometimes actually makes things worse. She learns the world is based on authority and social status. You can't correct the people with power over you; you must try to conform to their confused view of reality.

She learns no one understands. Everyone is just pretending to understand and hiding their weakness. Her parents and teachers are confused in many ways but have power over her anyway. She aspires to gain social status and power – to move up in the system – not to be a scientist.


My main idea here is that overreaching begins due to pressure to act before one is ready. Even if the parents' orders make sense and are reasonable, it doesn't matter if a kid is being pressured to act according to ideas he doesn't yet understand. It teaches him the very bad policy/habit of trying to act before he's intellectually ready and understands what he's doing well enough.

Also social status hierarchies are a big deal and very dangerous.


Elliot Temple | Permalink | Messages (0)

Robin Hanson Apologized For His Ideas

They broke Robin Hanson. http://mason.gmu.edu/~rhanson/JuneteenthApology.html

Hanson chose to be an icon and leader. Giving in is a betrayal of his followers, fans and values. It signals that you can't succeed by standing up for truth and free speech. He's discouraging them. He took on a responsibility and failed at it.

He was some sort of role model. He knew it and wanted to be. And that's why he was targeted. And then, with little fuss, RIP.

At the same time, Scott Alexander stood up to the NYT. Though, interestingly, Alexander wasn't even given the option to apologize and recant.

They gave Alexander the options fight back or not fight back, and be attacked either way using the same weapon (dox him by printing his name).

I saw something recently, forget where, about a revolution long ago, I think somewhere in China. I don't know if it's a true story or just designed to make a point. Was like:

What's the penalty for being late? Death.

What's the penalty for a revolution? Death.

So then they revolted cuz it's the same penalty anyway.

Did Hanson naively think that his job would always be safe when he criticized mainstream ideas? Did he think he lived in a society with free speech and tolerance of intellectual diversity? Or just that his particular university was especially great? I doubt it.

He ought to have known a confrontation was possible. If he wasn't prepared for the confrontation, what the hell was he doing? If his plan was to give in, he misled his readers about that.

Hanson is trying to proceed with blogging like nothing happened, without any explanation to his readers (other than the official apology, which doesn't explain it – a real explanation would be e.g. "they threatened my job, and i wanted to keep it, so i spoke out against the cause". That particular explanation would raise some questions before he was accepted back as an advocate and leader of the cause. If he has a better one, let's hear it. If he's muzzled, and can be threatened into not saying whatever the university leaders choose, then can we trust anything he blogs to be his real opinion?).

I was not much of a Hanson fan anyway, but he's one of the symbols we have ... well, had. I don't know of a bunch of better ones.

People should not accept him back. Don't act like this didn't happen. He's clearly compromised and there is no plan or strategy in place to enable his free and honest speech going forward. There are problems here which Hanson is trying to ignore instead of presenting solutions to. He's doing no post-mortem. He's making no plan to be more successful next time. He's presumably just decided on a bunch of things he's no longer willing to say publicly, and he's hiding the list from his audience. And I doubt it's even a list, in writing, or that he has any policies to ensure he consistently follows his plan. He may well behave inconsistently and get in trouble more, or refrain from saying things that aren't on the list, or both, and there's no transparency.


Elliot Temple | Permalink | Message (1)