
Programming and Epistemology

Organizing code is an instance of organizing knowledge. Concepts like being clear and putting things in sections apply to both programming and philosophy.

DRY ("Don't Repeat Yourself") and YAGNI ("You Aren't Gonna Need It") aren't just programming principles. They also apply to thinking and knowledge in general. It's better to recognize a general case than to handle several cases in separate, repetitive ways. And it's better to come up with solutions for actual problems than to create a bunch of over-engineered theory that may never have a purpose.
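
As a minimal sketch of the DRY point in code (the pricing example and all its names are hypothetical, invented just for illustration):

    # Repetitive: the same discount logic is restated for every case.
    def price_for_book(base):
        return base * (1 - 0.10)

    def price_for_toy(base):
        return base * (1 - 0.20)

    def price_for_game(base):
        return base * (1 - 0.15)

    # DRY: recognize the general case once; the cases become data.
    DISCOUNTS = {"book": 0.10, "toy": 0.20, "game": 0.15}

    def price(category, base):
        """Apply the category's discount to the base price."""
        return base * (1 - DISCOUNTS[category])

The general version states one rule instead of three, and adding a new case means adding a line of data rather than duplicating logic.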

The programming methodology of starting with the minimum thing that will work, and then making lots of little improvements until it's awesome -- based in part on actual feedback and experience with the early versions -- is also a good general method of thinking, connected to gradualist, evolutionary epistemology. It's also how, say, political change should be done: don't design a utopia and then try to implement it (like the French Revolution). Instead, look for smaller steps, so it's possible to change course midway once you learn more, so you get some immediate benefit, and so you reduce risk.

Programmers sometimes write articles about how evil rewrites are, and how they lead to vaporware. Nothing is ever perfect, but existing products have a lot of useful work put into them, so don't start over (you'll inevitably run into new, unforeseen problems) but instead try to improve what you have. Similarly, philosophically, there are three broad schools of thought:

1) the conservative approach, where you try to prevent any changes.

2) the liberal approach, where you try to improve what you have.

3) the radical approach, where you say existing ideas/knowledge/traditions are broken and worthless, and should be tossed out and recreated from scratch.

The liberal, non-revolutionary approach is the right one not just for code rewrites but also in philosophy in general (and in politics).


Consider two black boxes which take input and give output according to some unknown code inside. You try them out, and both boxes give identical output for all possible inputs. You wonder: are the boxes identical? Are they the same, for all practical intents and purposes? Must they even be similar?

Programmers, although they don't usually think about it this way, already know the answer. Code can be messy, disorganized, and unreadable, or not. Code can have helpful comments, or not. One can spend a day refactoring or deleting code, making sure all the tests still pass, so it does exactly the same thing as before, but now it's better. Some code can be reused in other projects, and some isn't set up for that. Some code has tests, and some doesn't. One box could be written in C, and the other in Lisp.
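
Here's a minimal illustration (the functions are hypothetical, not from any real project): both return the same output for every input, so as black boxes they are identical, yet their structure differs greatly:

    # Messy version: cryptic names, a redundant branch, no documentation.
    def f(x):
        t = 0
        for i in x:
            if i % 2 == 0:
                t = t + i
            else:
                t = t + 0  # dead code, does nothing
        return t

    # Clean version: identical output for every input, but readable,
    # documented, and easy to reuse or change.
    def sum_of_evens(numbers):
        """Return the sum of the even numbers in the input."""
        return sum(n for n in numbers if n % 2 == 0)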

None of these things matter if you only treat code as a black box and just want to use it. But if you ever have to change the code (adding new features, doing maintenance, fixing bugs), then all these differences which don't affect the code's output are important.

I call what the code actually does its "denotation" and the other aspects its "structure", and I call this field structural epistemology. Programming is the best example of where it comes up, but it also has broader philosophical relevance. One interesting question is whether, how, and why evolution creates good structure in genetic code (I think it does, but I'm not so clear on what selection pressure caused it). Another example is that factories have knowledge-structure issues: you can have two factories both making toys, with the same daily output, but one is designed so that it's easier to convert into a car factory later.

Elliot Temple | Permalink | Messages (10)

Mises on Force and Persuasion

Liberalism in the Classical Tradition by Ludwig von Mises, p 51
Repression by brute force is always a confession of the inability to make use of the better weapons of the intellect
This is similar to Godwin:
If he who employs coercion against me could mould me to his purposes by argument, no doubt he would. He pretends to punish me because his argument is strong; but he really punishes me because his argument is weak.

Elliot Temple | Permalink | Messages (0)

Milton Friedman was a Statist

Now you know.

http://www.hoover.org/multimedia/uk/3411401.html

Edit:

In the interview, he expresses disagreement with Ayn Rand and her view that the State is bad because it uses force against its citizens. He does not provide any argument that she's mistaken, or that his view is better.

Milton also, for example, advocated a negative income tax. That means that if you contribute a sufficiently small amount to the economy, the State takes money by force from other citizens and gives it to you.
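
As a sketch of the mechanism (the threshold and rate below are hypothetical illustration figures, not Friedman's actual proposal numbers): under a negative income tax, anyone earning below a set threshold receives a payment equal to some fraction of the shortfall.

    def nit_payment(income, threshold=10_000, rate=0.5):
        """Negative income tax: pay out a fraction of the shortfall
        below the threshold. All figures here are hypothetical."""
        return rate * max(0, threshold - income)

    # Someone earning $4,000 receives 0.5 * (10,000 - 4,000) = $3,000,
    # taken by taxation from other citizens.
    print(nit_payment(4_000))  # 3000.0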

The purpose of this post is simply to inform people about how a libertarian icon is a blatant Statist. (And, by the way, he's not the only one.)

Elliot Temple | Permalink | Messages (3)

Beyond Criticism?

The Retreat To Commitment, by William Warren Bartley III, p 123:
There may, of course, be other nonlogical considerations which lead one to grant that it would be pointless to hold some particular view as being open to criticism. It would, for instance, be a bit silly for me to maintain that I held some statements that I might make—e.g., "I am over two years old"—open to criticism and revision.

Yet the fact that some statements are in some sense like this "beyond criticism" is irrelevant to our problems of relativism, fideism, and scepticism.
The claim that some statements are beyond criticism is anti-fallibilist and anti-Popperian.

It is not at all silly to maintain that the example statement is open to criticism. It's essential. Not doing so would be deeply irrational. We can make mistakes, and denying that has consequences, e.g. we'll wonder: how do we know which things we can't be mistaken about? And that question begs for an answer that is authoritarian, as well as false.

You may be thinking, "Yes, Elliot, but you are over two years old, and we both know it, and you can't think of a single way that might be false." But I can.

For example, my understanding of time could contain a mistake. Is that a ridiculous possibility? It is not. Most people today have large mistakes in their understanding of time (and of space)! Einstein and other physicists discovered that time and space are connected, and it's weird and doesn't follow common sense. For example, the common sense concept of two things happening simultaneously at different places is a mistake: what appears simultaneous actually depends on where you watch from. If some common sense notions of time can be mistaken, why laugh off the possibility that our way of keeping track of how much time has passed contains a mistake?
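
To make that precise (this is standard special relativity, stated in my notation rather than taken from the post): for two events separated by time \Delta t and distance \Delta x in one frame, an observer moving at speed v measures

    \Delta t' = \gamma \left( \Delta t - \frac{v \, \Delta x}{c^2} \right),
    \qquad \gamma = \frac{1}{\sqrt{1 - v^2 / c^2}}

So if the events are simultaneous in the first frame (\Delta t = 0) but happen at different places (\Delta x \neq 0), then \Delta t' \neq 0: the moving observer does not see them as simultaneous.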

Another issue is when you start counting. At conception? Most people would say at birth. But why birth? Maybe we should start counting from the time Bartley was a person. That may have been before or after birth. According to many people, brain development doesn't finish until age 20 or so. In that case, a 21 year old might only have been a full person for one year.

Of course there are plenty of other ways the statement could be mistaken. We must keep an open mind to them, so that when someone has a new, counter-intuitive idea, we don't just laugh at him but listen. Sure, the guy might be a crank, but if we ignore all such ideas, we'll ignore the good ones too.

Elliot Temple | Permalink | Messages (94)

Another Problem Related To Critical Preferences

X is a good trait. A has more of X than B does. Therefore A is better than B.

That is a non sequitur.

You can add, "All other things being equal" and it's still a non sequitur.

X being a good or desirable trait does not mean all things with more X are better. That there are all sorts of reasons X is amazing does not mean X is amazing in all contexts and in relation to all problems.

You'd need to say X is universally good, and all other things are equal. In other words, you're saying the only difference between A and B is the amount of something that is always good. With premises that strong, the claim works. However, it's now highly unrealistic.
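
Put semi-formally (the notation is mine, not the post's), the contrast is:

    \text{Good}(X) \wedge \text{More}_X(A, B) \;\not\Rightarrow\; \text{Better}(A, B)

    \text{UniversallyGood}(X) \wedge \text{More}_X(A, B) \wedge \text{AllElseEqual}(A, B) \;\Rightarrow\; \text{Better}(A, B)

The first inference is the non sequitur; the second is valid only because its premises are so strong.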

It's hard to find things that are universally good to have more of. Any medicine or food will kill you if you overdose enough. Too much money would crush us all, or could get you mugged. An iPhone is amazing, but an iPhone found by a hostage taker who previously asked for everyone's phones can get you killed.

You can try claims like "more virtue is universally good". That is true enough, but that's because the word "virtue" is itself already context sensitive. It's also basically a tautology and immune to criticism, because whatever is good to do is what's virtuous to do. And it's controversial how to act virtuously or judge virtue. If you try to get specific like, "helping the needy is universally good," then you run into the problem that it's false. For example, if Obama spent too much time working in soup kitchens, that wouldn't leave him enough time to run the country well, so it'd turn out badly.

You could try "more error correction is a universally good thing" but that's false too. Some things are good enough, and more error correction would be an inefficient use of effort.

You might try to rescue things by saying, "X is good in some contexts, and this is one of those contexts." Then you'll need to give a fallible argument for that. That is an improvement on the original approach.

Now for the other premise, "all other things being equal." They never are. Life is complicated and there are almost always dozens of relevant factors. Even if they were equal, we wouldn't know it, because we can never observe all other things to check for their equality. We could guess they are equal, which would hold if we didn't miss anything. But the premise "all other things being equal, unless I missed some relevant factor" isn't so impressive. You might as well just say directly, "A is better than B, unless I'm mistaken."

Elliot Temple | Permalink | Message (1)

Examples of Accepting Contradicting Ideas

People commonly say things like, "That's a good point, but alone it's insufficient for me to change my position."

In a debate club meeting, or a Presidential debate, most of the non-partisan audience usually comes away thinking both sides made some good points.

Debaters think an idea can suffer a few setbacks, but still be a good idea. They aren't after perfection but just trying to get the better of their debating opponent.

These are examples of the same mistake underlying critical preferences: simultaneously accepting two conflicting ideas (such as a position, and a criticism of that position).

PS Notice that "simultaneously accepting two conflicting ideas (and making a decision about the issue)" would be a passable definition of coercion for TCS (Taking Children Seriously) to use. This highlights the connection between coercion and epistemology. The concept of coercion in TCS is about when rational processes in a mind break down. The TCS theory of coercion tries to answer questions like: What happens then? (Suffering; a big mess.) What causes the breakdown? (Different parts of the mind in conflict, and the failure to resolve this by creating one single idea of how to proceed.) What does the mind look like when it happens? (It contains conflicting, active theories.)

Elliot Temple | Permalink | Messages (0)

Weak Theory Example

T1 is a testable, scientific theory to solve problem P. T2 is a significantly less testable theory to solve P. In Popper's view, barring some other important consideration, if both T1 and T2 are non-refuted, then we must prefer T1 and say it's better.

But T1 might not be better. You could easily choose a T1 that's false and a T2 that's true, as best we know today, without contradicting the description of the situation.

You can assert that T1 is better, as far as we know, given the current state of knowledge. But is it? Where is the argument that it is? This looks to me like both explanationless philosophy and positive philosophy (T1 is supported by its testability, and T2 isn't). T2 is losing out without any criticism of it.

What we should do is not say T1 is better, but say: T2 needs to be testable to be a viable theory, because X. X can be a generic reason, such as: scientific theories should be testable, and P is a scientific problem. Once we say this, we are making a critical argument: we're criticizing T2. This offers T2 the chance to defend itself, which never came up in the original analysis.

It's now up to T2 to offer a reason that it doesn't need to be more testable, or actually is more testable. T2 can criticize the criticism of it, or be refuted. (BTW if T2 didn't already contain this reason, and it has to be invented, then T2 is refuted and T2b is now standing, where T2b consists of the content of T2 plus the new content that criticizes this criticism of T2.)

Then if the testability criticism is criticized, it can either be refuted or be amended to include a criticism of that criticism. And so on. This approach takes seriously the idea that we only learn from criticism. That makes sense because criticisms are error-correcting statements: they explain a flaw in something, which helps us avoid a mistake.

Elliot Temple | Permalink | Message (1)