Structural Epistemology Introduction Part 2

Part 1

Last time I alluded to the most important aspect of knowledge structure: some structures are more resistant than others to being changed to have some other function (denotation). Additionally, whether a structure is easy to adapt to a new problem is not simply a matter of luck. Rather, some structures are better than others, because they contain more knowledge. Now I will give some illustrations.

First, let's reexamine the multiply function. What if the situation changed and we suddenly had to rewrite our multiply function with a special constraint? Such as, what if the built-in multiplication function in our programming language was no longer available? Or what if user-defined function calls suddenly became very slow and expensive? Or what if there was a problem with assignment, and we couldn't use it (basically, no equals sign allowed)?

It turns out each of these problems would break one of the multiply functions so badly that we would be better off starting over from scratch than trying to salvage it, while the other two wouldn't need even a single change. (If you're wondering: no built-in multiplication ruins the third multiply; no assignment ruins the iterative version; and expensive user-defined function calls ruin the recursive version.) This demonstrates that structure makes a difference. But so far no one structure is obviously better than the others.

Next, let's imagine we were writing a program that played some game, and a few dozen times in the program we needed to refer to the number of actions each player gets per turn. And let's suppose it's 8 now, but it's possible this may change in future versions of the game. One thing we could do is put an 8 everywhere we need to refer to the number of actions per turn. The program will run just fine. But if we have to change the number of actions per turn later (or perhaps we'd just like to try out a different number, to see if changing it might be a good idea at all), then we will have to go through our whole program and alter a few dozen lines of code! That's a pain, and there's a better way.

What we should do is define a constant, const int ACTIONS_PER_TURN = 8, and then write ACTIONS_PER_TURN instead of 8 throughout our program. Then we could very easily change the number of actions per turn by altering a single line of code. This new program using a constant has exactly the same denotation as the original one with 8 everywhere -- someone playing the game will never know the difference. But not only is the structure different using a constant, it's better, because it allows significant advantages in the ways it can be changed, with no disadvantage at all(1). One way to put the difference is that it contains the knowledge that each of the dozens of 8's in the program is really the same thing, thus allowing them to be changed as a group.

Another example of trying to change a program: suppose we had our multiply programs and wanted to do exponentiation (assume there is no built-in function for that). In that case, the program that relied on built-in multiplication is absolutely useless, just as it would be useless to change into anything at all that wasn't built in. This reveals its structure has very little knowledge in it. On the other hand, the recursive and iterative multiply programs could both be changed to do exponentiation fairly easily. They could also be altered to do a host of other things, because each has a knowledge-laden structure. In effect, they are both set up to do work (in a certain way), and only need to be told what type. (It's not clear which one has more structural knowledge. I believe the recursive one does, but they are useful in different ways.)

So, to sum up, if we wish to change a program to do something else, depending on its structure, we may have an easy time of it, or may be totally out of luck. And furthermore, some structures are better than others, because they contain more knowledge.


(1) It will run negligibly slower, or compile negligibly slower in a compiled language. And I mean negligibly.

PS I understand that if you knew that, for what you were doing, certain structural knowledge was entirely unnecessary, and never would be useful, you might intentionally leave it out, and say this was a better design. However, this is very rare on anything but the most trivial project, and does not ruin the idea of better structures. It's just like, if I were trying to learn physics, I might not need an economics lecture. But we can still say economics has useful, true knowledge, and that there is better and worse economic knowledge.

To be clearer, the objection I fear goes, "Constants are nice, if you're going to change them, but if you aren't, using them is a waste of time, therefore which structure is better depends entirely on the problem at hand, and thus better is only a relative term for knowledge structures." This is wrong. It is equivalent to saying, "The laws of supply and demand are nice, if you're learning about economics, but if you aren't, learning them is a waste of time, therefore whether hearing the laws of supply and demand or nothing is better depends entirely on the problem at hand, and thus better is only a relative term for economic theories." In both cases the 'and thus' clause simply does not follow. Just because we might not want a bit of knowledge this instant does not make it equivalent to no knowledge, or make its value relative.

PPS Mad props to David Deutsch, 'cause he's cool.


Elliot Temple | Permalink | Messages (2)

Popper Is Fallible

I just read a little of The Myth of the Framework by Karl Popper. I noticed two oversights I thought were worth pointing out. Both quotes are from page 175, and the first immediately precedes the second.

If we eliminate from language ambiguous terms like 'yesterday', a term which today means something different from what it will mean tomorrow, and if we take some further similar precautions, then it follows from Tarski's theory that every statement in this purified language will be either true or false, with no third possibility.

The issue Popper is worried about is evaluating whether the statement "Yesterday was Sunday." is true. He thinks this will be ambiguous, because it depends on what day we evaluate it. And his solution is to purify our language by removing all terms with variable meaning (presumably all pronouns too).

But this is very silly. All we have to do to decide if "Yesterday was Sunday." is true is substitute in the referenced concepts before saving the sentence for later evaluation, when the references might no longer work. What I mean is, here 'yesterday' means 'the day before November 17, 2003'. The day before November 17, 2003 will always be a Sunday whenever we evaluate the sentence. (And even if our calendar system should change, the meaning and truth of the sentence will not.) So no purified language is necessary, if we will only bother to pay attention to the actual content of the sentence (alternatively, we could keep the form of the sentence exactly the same, but save all relevant data with it, such as, in this case, the date it was written).

Moreover, we can have an operation of negation in our language such that if a proposition is not true, then its negation is true.
This shows that of all propositions one half will be true and the other half false. So we can be sure that there will be lots of true propositions, even though we may have great trouble in finding out which they are.

I think this is actually quite funny. Yeah, there are lots of true propositions when you include the negation of false propositions... But most of them are things like, "I did not go to England yesterday," and "My house is not painted red," and "My name is not Fred." In reality, it makes sense to say there are a lot more ways to be wrong than to be right.


Elliot Temple | Permalink | Messages (0)

on skool

Teacher: A person who talks in other people's sleep.

What's long and hard and fucks little girls? Elementary school.


Elliot Temple | Permalink | Messages (5)

example for previous entry

just read this about Soros, a billionaire who's giving away money trying to make Bush lose the 2004 election.

if a democrat was consistent with his ideals, he'd be poor-ish. cause his ideals include misunderstanding economics, opposing business, and wasting money. but Soros is rich. how'd that happen? well, he's inconsistent.

if all democrats were consistent they'd be a crappy political force. but they manage to find people who somehow, inconsistently, are democrats who are good at this or that thing that the rest of the democrats can't manage. and this way they can end up with some rich supporters despite their ideology, and thus be more dangerous.


Elliot Temple | Permalink | Messages (5)

I wonder if the category should be epistemology or morality

Tom Robinson is now officially my coolest reader. He commented as follows WRT inverse theory:

I'm slightly fuzzy about this inverse world view. Is it wrong about everything, or just some things, or just incoming morally-weighted facts? I mean, The Emperor knows that 0+1=1, so if he starts with no Death Star and then builds one new Death Star, then he'll end up with ... a Death Star. He knows this to be true despite being the epitome of evil.

To start, I deny The Emperor actually is the epitome of evil, or even all that close. But anyway, I would say: suppose we have propositions A, B, and C, where A and B are consistent with each other, and C contradicts A. This implies that C and B somehow contradict. There aren't multiple ways to hold B and be consistent, so if A really is consistent with B and inconsistent with C, then B must be inconsistent with C. This follows directly from the idea that there is one truth.

To put in real propositions, B states 0+1=1. A states that we shouldn't murder Jews. I propose A and B are consistent. C states that we should murder Jews. I propose A and C are inconsistent. I conclude that B and C are inconsistent -- that wanting to murder Jews and doing math right contradict. This works with any form of being evil and math.

It is hard to see what the inverse worldview looks like. It is foreign to us, and most of its twisted logic is beyond our worst nightmares. We get glimpses in the bad people of our world, but they are nowhere near the limits of evil.

Good people are successful and flourish. Bad people, therefore, are unsuccessful and do not flourish, objectively, even if they think they do (or perhaps they think flourishing is bad, and think they do not flourish). I believe, in the limit, evil people would be unable to eat meals, or otherwise manage to even stay alive.

My explanation of why the bad people of our world manage to eat, and even manage to use creativity to plan nasty attacks, is that they are inconsistent. Much of their worldviews are true. They use the true bits to function. But they also have a significant, inverse portion, from which they take many of their goals and motives.

Notably, it is this inconsistent combination that allows them to be truly dangerous. An evil person who uses some true ideas to get what he wants is more threatening than an evil person whose own evil has rendered him impotent.


Elliot Temple | Permalink | Message (1)

On Charity

A common point of disagreement in political discussions is about human nature. Some people say that men should make their own choices and control their own money, and believe that only good will come of freedom. Others retort that the rich will have more choices, and will abuse them to gain more power. Or at least they assert that some people will be left behind without help through sheer bad luck (or not having a level playing field), and that generosity is not natural, so the government must step in to help.

Roughly, right wing people take the first view, and favour free markets, small government, and people deciding for themselves how charitable to be. And, roughly, left wing people don't trust humans to be charitable or fair without being controlled by government.

So when a right winger says he isn't against helping people, he just wants to decide how best to do it, and make sure his charity is effective (the government, he will say, is wasteful and spends charity money badly), a left winger will likely scoff. The left winger will think this is just a trick to get out of giving any charity at all. Because the left winger trusts his government to do everything right, he will see any attempt to pay less taxes or avoid forced charity as, clearly, a selfish attempt to get out of paying one's fair share or to get out of helping other people.

So, who's right?

Well, I've got a way to find out. Despite high tax levels (paid by both left and right wing), it is commonplace to give additional money, by choice, to charities. Now, if the left is correct, we should observe that the greedy right wingers donate very little to charity. But if the right is telling the truth that they are happy to give money to charity, as long as they pick which charity, and give money in ways they feel are effective, then we will observe, despite taxes, that right wingers do choose to donate significant amounts of money to charity.

The following table ranks each state by how generous it is. This was determined by taking into account the amount of money donated to charitable organisations, and also how rich the people in that state are. In other words, one gets a high ranking by giving a large portion of what he has. The states are color-coded. Red states voted for Bush in the 2000 election (they're, to decent precision, right wing). Blue states voted for Gore. I believe the table speaks for itself. (Thanks to The Rantblogger for the table.)

  1. Mississippi
  2. Arkansas
  3. South Dakota
  4. Oklahoma
  5. Alabama
  6. Tennessee
  7. Louisiana
  8. Utah
  9. South Carolina
  10. Idaho
  11. North Dakota
  12. Wyoming
  13. Texas
  14. West Virginia
  15. Nebraska
  16. North Carolina
  17. Florida
  18. Kansas
  19. Missouri
  20. Georgia
  21. New Mexico
  22. Montana
  23. Kentucky
  24. Alaska
  25. New York
  26. Indiana
  27. Iowa
  28. Ohio
  29. California
  30. Washington
  31. Maine
  32. Maryland
  33. Hawaii
  34. Delaware
  35. Illinois
  36. Pennsylvania
  37. Connecticut
  38. Vermont
  39. Virginia
  40. Oregon
  41. Colorado
  42. Arizona
  43. Michigan
  44. Nevada
  45. Wisconsin
  46. Minnesota
  47. Massachusetts
  48. New Jersey
  49. Rhode Island
  50. New Hampshire

Elliot Temple | Permalink | Messages (17)

Structural Epistemology Introduction Part 1

Imagine you are handed a black box. You can't open it, but on one side is an input mechanism, and on the other side is an output mechanism. For example, the input mechanism might be a keyboard, and the output a display screen. The box, somehow (you don't know the inner workings) maps inputs to outputs. That means if you give it an input, it figures out what output to give back, according to its inner workings. And for simplicity, assume the box is in no way random. For a given input, it always gives the same output.

Now, imagine someone gives you a second black box. And you test both out, and discover that for any input, both boxes give the same output. You test every single allowed input, and they always give the same answer. (The word I will use for this is: the two boxes have the same denotation). Now, the question is: do the boxes do the same thing? Do they contain the same knowledge?

Well, of course it's possible that they do. They might be the same inside. But can we be sure? Just because they always answer the same way, can we tell they definitely do the same thing? And either way, can we say they definitely have the same knowledge?

I'd like to apologise to non-programmers now. The following examples will probably look like gibberish to you. But read the English around them, and I think my point should still make sense.

Here are three different ways to do a multiply function. They all accurately multiply any integers. They have the exact same domain (allowed input), the same range (possible outputs), and they map (relate) the same elements of the domain (inputs) to the same elements of the range (outputs).

// iterative multiplication
int multiply(int a, int b)
{
    int total = 0;
    if (b > 0)
        for(int j=0; j<b; j++)
            total += a;
    if (b < 0)
        for(int j=0; j>b; j--)
            total -= a;
    return total;
}

// recursive multiplication
int multiply(int a, int b)
{
    if(b == 0)
        return 0;
    if(b > 0)
        return (a + multiply(a, b-1));
    return ( (0 - a) + multiply(a, b+1)); // b < 0
}

// multiplication using a built-in function
int multiply(int a, int b)
{
    return a*b;
}

As you can see, even if you don't understand the code, all three are written differently. I assure you, however, they do give the same answers. Now, remember the black box I talked about? Well, let's say you have three that all do integer multiplication. The inner workings could be the three functions I just showed.

Do each of the black boxes do the same thing? No. Each uses a different procedure to find its answer. Like if you wanted to get from California to New York, you might go through Canada, through Mexico, or stay in the US the whole way. Each trip would start and end in the same place, but they'd certainly be different trips.

But the key question is whether each black box, or each multiply function, which has the exact same denotation, has the same knowledge.

I propose they do not. While they have the same denotation, I would say they have different knowledge structures. And to see why this matters, and makes a great difference: the boxes have the same functionality (namely multiplication) now, but what if we want to alter them? If we want to change their denotation, even just a little bit, then knowledge structure makes all the difference.

To be continued...

PS: I'm aware that I'm not using 'denotation' in the standard, dictionary way.

Note: David Deutsch explained much of what I know about structural epistemology to me. Kolya Wolf explained some too, and also Kolya originally thought of the idea.

Part 2

Elliot Temple | Permalink | Messages (7)