Popper Mistaken About Physical Determinism

_Objective Knowledge_ by Karl Popper, p 221
physical determinism implies that every physical event in the distant future (or in the distant past) is predictable (or retrodictable) with any desired degree of precision, provided we have sufficient knowledge about the present state of the world.
This is false. Physical determinism does not imply that we can calculate what the past was like based on the present.

The reason is that some functions are not reversible. Knowing the function used, and the output, does not let you calculate the input.

An example is addition. If you know two numbers were added, and the result was four, you cannot work out what the original numbers were. The output of addition has less information than the input.
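A minimal sketch of the point, using made-up small numbers:

```python
# Minimal sketch: addition is not reversible. Many different inputs
# produce the same output, so the output alone cannot determine the input.
def f(a, b):
    return a + b

preimages = [(a, b) for a in range(5) for b in range(5) if f(a, b) == 4]
print(preimages)  # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)] -- all map to 4
```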

To calculate what the past was like based on the present, one needs to posit both physical determinism and that all the laws of physics are reversible.

Elliot Temple | Permalink | Messages (4)

Bayes and Induction

Here is a question for people who think Bayes' theorem holds answers for epistemology. Suppose we have a coin. We estimate the prior probabilities of heads and tails at 50% each. We flip the coin 5,000 times. They all come up tails. Now we want to update our estimates of the probabilities of heads and tails. If we flip it again, what should we estimate the chance of another tails is?

This is a very generous question. Choosing prior probabilities is itself a serious issue, but we grant that. Having 5,000 data points, all with precise, unambiguous results, is not common in daily life. Plus the data can be summed up nicely and has a strong, easy-to-analyze pattern. And coin flipping is especially suited to a Bayesian approach. It's just as generous as a problem about picking different colored marbles out of a bag. And I don't ask for an explanation, only for a new probability estimate, which is again what Bayes is all about.

But I don't think Bayesians can answer this (or any harder question). If one tries, here is what you say to them next: "Would you agree that some parts of what you just said are not implied by Bayes' theorem, but are extra things you've added?" When they agree they've stepped a little beyond the bounds of the formula itself, then you can ask them about how much of their procedure for answering the question is Bayes' formula. And ask about where this extra part is coming from, and where to find a rigorous statement of how it works, and so on.
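For concreteness, here is a minimal sketch (in Python) of what the standard textbook answer looks like. Notice how much of it is extra machinery that Bayes' theorem itself doesn't supply: the assumption that the flips are independent with a fixed unknown bias, and the choice of a uniform Beta(1, 1) prior over that bias. Those assumptions, not the theorem, are doing the work.

```python
# A sketch of the standard textbook answer. The extra assumptions (not given by
# Bayes' theorem itself): flips are i.i.d. with a fixed unknown tails-probability p,
# and p gets a uniform Beta(1, 1) prior.

prior_alpha, prior_beta = 1, 1   # Beta(1, 1) prior over p -- a choice, not a theorem
tails, heads = 5000, 0           # the observed data

# Conjugate update: the posterior over p is Beta(prior_alpha + tails, prior_beta + heads)
post_alpha = prior_alpha + tails
post_beta = prior_beta + heads

# Posterior predictive probability that the next flip is tails
# (Laplace's rule of succession)
p_next_tails = post_alpha / (post_alpha + post_beta)
print(p_next_tails)              # 5001/5002, roughly 0.9998
```

With those assumptions in place the answer is about 0.9998. Drop or change the assumptions and the formula alone gives no answer at all.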

Now, here is a scenario for inductivists. I have a Rails application with a memory leak. I want to find the leak and fix it. How do I do it? You have a theory of knowledge, which is supposed to (along with deduction) explain how knowledge is created, right? So tell me how to create knowledge of my memory leak. Tell me how to solve a real problem.

I can repeat the test code which causes the leak thousands of times if you want. And I can run code that doesn't cause the leak thousands of times. You can have all the repeated observations you want. But I don't see how that will help. Tell me, Mr. Inductivist, how will repeating these observations help anything? Should I get different Rails applications, perhaps thousands of them, and see if they leak memory? I can do that, but is it really going to figure out where the bug in my program is?

Here is how I actually find memory leaks. I make guesses about where the problem might be, and then I think of ways to test whether I'm right or wrong. For example I guess it's in a certain section of code, then I delete that section and run the application and see if the leak goes away or not. Just like Popper said: guesses and criticism, trial and error.

I also run some programs to get statistics. What statistics? The ones I guess might be useful, such as a list of the most numerous objects in memory. How do I get from this list to figuring out which code is to blame? Sometimes I don't. Other times I think "Oh, lots of widgets, well I think I know where we create a lot of widgets" and I come up with a guess about which code is probably making them. None of this follows the inductivist model where you make repeated observations (of what? Just run the same exact thing over and over? If not, then how do you decide what to observe?) and then infer the answer from the observations (so I observe the leak every day for 3 years, and write down what happened each time, and then somehow I infer from this what the problem is? That "somehow" is very vague. That's where induction falls flat.)
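For the curious, here is roughly what one of those statistics-gathering programs can look like. This is a hypothetical sketch in Python rather than the actual Ruby tooling for a Rails app; it just counts the most numerous object types currently in memory. The point stands either way: the output is raw material for guesses, not an inference procedure.

```python
# Hypothetical sketch (Python, not the Rails app's actual Ruby tooling) of one of
# those statistics-gathering programs: count the most numerous object types
# currently in memory. The counts are raw material for guesses, not an inference.
import gc
from collections import Counter

def most_common_objects(n=10):
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(n)

if __name__ == "__main__":
    for name, count in most_common_objects():
        print(f"{count:>8}  {name}")
```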

One of the general patterns this post illustrates is that bad philosophy can be dealt with by asking it to be effective. Asking to see it in action. Even just in simple, realistic examples.

See also: Popper on Bayesians

Elliot Temple | Permalink | Messages (3)

Misreading Popper

I think the majority of people who say they agree with Popper read him like this:

Popper says there is no verification or justification. They think "That sounds stupid. I must have misread it. Let me try to find a better interpretation." (And, by the way, if they didn't know to do that in advance, they might well learn it while reading Popper, since he's a big advocate of seeking the best possible version of arguments one encounters.)

So, they try to think of a way to change Popper's statements about epistemology to be more reasonable. They end up interpreting his statements to be consistent with the only epistemology that makes any sense to them: the prevailing one.

And so they go through the whole book interpreting everything Popper says as advocating justificationism, induction, and the theory that knowledge is justified, true belief.

They perhaps take Popper to be making some critiques of it, for example they might notice Popper denies that infallible justifications are possible, but overall they take him to fundamentally agree with justificationism. What else could he be saying? As far as they can see, it's the only possible, conceivable approach to epistemology. One can't disagree with it; it's the manifest truth of how to think. Anyone who disagrees with it wouldn't be able to think; he'd end up in an insane asylum.

Let me emphasize: I think a majority of people who claim to agree with Popper do not. That's how hard communication and persuasion are, and how different two people who think they "agree" often are. Popper knew people are different and persuasion is hard, but I don't think he ever said it this strongly.

Elliot Temple | Permalink | Messages (0)

Authoritarian Irony

http://philosophy.suite101.com/article.cfm/critical_rationalism
One of Popper's students at the London School of Economics was William Warren Bartley III. According to Rafe Champion, Bartley, along with Popper, recognizes "the authoritarian way of thinking which charactorizes Western thought."
The author understands that Popper was opposed to authoritarian ways of thinking.

What he doesn't understand is that citing Rafe Champion as an authority on what Popper and Bartley's positions are is itself an instance of authoritarian thought. And a very archetypical one at that.

Elliot Temple | Permalink | Messages (5)

Refinements of inductive inference by Popperian machines

ftp://ftp.udel.edu/pub/people/case/kyb-final.ps

Most of this paper is code/math rather than philosophy. I am only criticizing its philosophy. It may be a very good paper within its field.
Consider a real world phenomenon f that is being investigated by an agent M. M performs discrete experiments x on f. For example, x might be a particle diffraction experiment and f(x) the resultant probability distribution on the other side of the diffraction grating. By a suitable encoding of the experiments and results we may treat f as a function from N = {0,1,2,...}, the set of natural numbers, to N. A complete explanation for f is a computer program for f. Such a program for f gives us predictive power about the results of all possible experiments related to f. We are concerned about the theoretical properties of the agents which attempt to arrive at explanations (possibly only nearly correct) for different phenomena. In what follows we will conceptualize such agents as learners (of programs for functions).

An inductive inference machine (IIM) is an algorithmic device which takes as its input a graph of a function N -> N, an ordered pair at a time, and, as it is receiving its input, outputs computer programs from time to time.
The use of the word 'explanation' here is not how Popper uses it. This is because they are not philosophers and are not doing philosophy. I am not faulting them for that, but I was rather hoping for a critique of Popper's philosophy, which this is not. I discuss the word 'explanation' more later.

The use of the word 'induction' here *is* how I use it. Their use of induction here has *data first*, and then "explanations" are created second, based on the data.
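To make that data-first picture concrete, here is a toy sketch (mine, not the paper's construction) of the kind of thing an inductive inference machine is: it reads (x, f(x)) pairs one at a time and, after each pair, outputs the first hypothesis in a fixed enumeration that is consistent with everything seen so far. The hypothesis class here (polynomials with small integer coefficients) is made up purely for illustration.

```python
# Toy sketch of an "inductive inference machine": read (x, f(x)) pairs one at a
# time and, after each pair, output the first hypothesis in a fixed enumeration
# that agrees with all the data seen so far. The hypothesis class (polynomials
# with small integer coefficients) is a made-up example, not the paper's.
from itertools import product

def hypotheses(max_degree=3, coeffs=range(-3, 4)):
    for degree in range(max_degree + 1):
        yield from product(coeffs, repeat=degree + 1)   # c[i] multiplies x**i

def evaluate(c, x):
    return sum(ci * x**i for i, ci in enumerate(c))

def iim(stream):
    seen = []
    for pair in stream:                                 # receive one data point
        seen.append(pair)
        for h in hypotheses():                          # output first consistent guess
            if all(evaluate(h, x) == y for x, y in seen):
                yield h
                break

print(list(iim([(0, 1), (1, 3), (2, 9), (3, 19)])))     # ends at (1, 0, 2), i.e. 1 + 2x^2
```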

They assert their machines arrive at "explanations" (computer programs) which are correct or nearly correct using this inductive approach. This is, at least according to Popperian philosophy, impossible. Here are some reasons:

Any finite set of data is compatible with infinitely many theories. Only one is correct. The machine has no way to judge which theories are better than others. Therefore the machine cannot succeed. (Note: if we did have a theory telling us how to judge which are better than others, that would no longer be induction because all the content would be coming from this theory and not by induction from the data.)

There is no way to generalize data points into a theory. Imagine the data points on a 2-dimensional graph. A theory is a line on the graph (or the function which generates that line, if you prefer). I don't mean a straight line, it can curve around or whatever. A theory *consistent with the data* would have to go through every point. There are infinitely many ways to draw such a line. Any portion of the line between any two points, or after the last point, or before the first, can go absolutely anywhere you feel like on the graph at your whim while remaining fully *consistent with the data*. The points can be connected in any order. The data points provide no useful restrictions at all on which theories (lines) are possible. (IIRC this argument is in _The Logic of Scientific Discovery_).
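Here is a minimal numerical sketch of that point, using made-up data points. Every "theory" in the family below agrees exactly with the data, yet predicts whatever you like everywhere else:

```python
# Made-up data points; every "theory" below passes through all of them exactly,
# yet predicts wildly different things in between and beyond them.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])   # observed inputs
ys = np.array([1.0, 3.0, 2.0, 5.0])   # observed outputs

def make_theory(k):
    """Return a function consistent with all the data points. Different k values
    give different predictions everywhere off the data."""
    base = np.poly1d(np.polyfit(xs, ys, len(xs) - 1))  # one interpolating polynomial
    bump = np.poly1d(np.poly(xs))                      # zero at every observed input
    return lambda x: base(x) + k * bump(x)

for k in (0, 1, -50, 1e6):
    theory = make_theory(k)
    print(k, theory(xs), theory(1.5))  # same on the data, different at x = 1.5
```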

Some people would say you should draw the smoothest line between the points, and avoid the bendy ones. This kind of sounds nice in English. But it's not so easy when you deal with real theories, especially philosophical theories. What is the smooth line to tell me the right theory about the morality of stealing? But also consider a field with lights that are turned off at 6am, and turned on at 6pm, every day. If you make observations at noon and midnight daily, and draw straight lines between the data points, you will predict the lights are partially on in between your observations, which is wrong. When it comes down to it, no one has ever made a general purpose theory of this sort (draw the smooth line) which works.
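A small sketch of the lights example, assuming the stated schedule:

```python
# Toy version of the lights example: the true state is a step function, but the
# "smooth line" (straight-line interpolation between the noon and midnight
# observations) predicts the lights are partially on in between, which never happens.
import numpy as np

def lights_on(hour):
    """True state: off from 06:00 to 18:00, on otherwise (1 = on, 0 = off)."""
    return 0.0 if 6 <= (hour % 24) < 18 else 1.0

obs_hours = [0, 12, 24]                           # midnight, noon, midnight
obs_values = [lights_on(h) for h in obs_hours]    # [1.0, 0.0, 1.0]

hour = 18.0                                       # 6pm, between observations
smooth = np.interp(hour, obs_hours, obs_values)   # straight lines between the points
print(smooth, lights_on(hour))                    # 0.5 ("partially on") vs 1.0 (on)
```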

Further, the smooth line theory involves having a *theory first* (about the type of line most likely to correspond to a true theory), and then making guesses *based on the theory* and interpreting the data in light of the theory. So it's not really induction anymore.

In other words, because (following Popper) induction does not work, the inductive inference machine will not work.


In the paper's abstract it asserts the way the induction machines are "Popperian" is that they make use of Popper's "refutability principle". Later the paper says:
Karl Popper has enunciated the principle that scientific explanations ought to be subject to refutation[23]. Most of this paper concerns restrictions on the machines requiring them to output explanations which can be refuted.
Unfortunately the word "explanation" in Popper's principle has a different meaning than the "explanations" which the machines output. In fact they are not creating any explanations at all in Popper's sense. So their machines do not follow his principle (except perhaps by loose analogy or metaphor).

Their sense of "explanation" is a correct program, i.e. one that can *predict* data points. But Popper's idea of an explanation is an English statement to *explain the underlying phenomenon*, not just to make predictions. The idea that scientific explanations are nothing more than instruments for making predictions is *instrumentalism*. You can find criticism of instrumentalism in _The Fabric of Reality_ chapter 1. Also in Popper (various places, e.g. _Objective Knowledge_ pp. 64-65).

The paper also talks about the reliability of their inductive inference machines. Their approach is justificationist. They attempt to *establish* the reliability of some knowledge (not as absolutely certain truth, just as reliable/partially-certain/supported/weakly-justified, whatever you want to call it). This is anti-Popperian. They do not provide criticism of Popper's arguments on the subject.

Elliot Temple | Permalink | Messages (0)

Comments on Philosophy of Biology versus Philosophy of Physics by William W Bartley III

http://www.the-rathouse.com/Philosophy_vs_Philosophy_of_Physics.pdf
Let me ask you for a show of hands. How many of you will agree that you see me? I see a number of hands - so I guess insanity loves company. Of course, you don't "really" see me. What you "see" is a bunch of pieces of information..., which you synthesize into a picture image... It's that simple.

...

Representationalism, the commonsense position which Bateson appears to criticize in our example, and which is rejected outright by Machian philosophy, is also the position of many of the founders of the western scientific tradition - including Galileo, Boyle, and Newton.
This is strange. The example says you see *information*. That means the picture image you create is based on real empirical information. It's not a fantasy. This is consistent with common sense. Common sense has no problem with the mechanism of sight being that information comes to your eyes, on photons, and then is processed by your brain, as long as the conclusion of the story is that you end up with a roughly accurate picture of the real world.
As Newton wrote: "In philosophical disquisitions we ought to abstract from our senses and consider things themselves, distinct from what are only sensible measures of them". Such representationalism maintains that the members of Bateson's audience - at least those that had vision - did see Bateson (at least if he was there).
This is just toying with terminology. There's no substantive difference. The example defines the word "see" so "seeing" photons isn't "seeing" people. That's mildly silly, I guess, but it wasn't the focus of the example. The point was to consider the specific mechanism by which we see. And that matters sometimes, e.g. for considering where errors can creep in.


Reading on, it looks like this example was badly chosen and "presentationalism" is actually about saying we learn about appearances rather than objective reality. That is not a claim about the mechanism of sight, and isn't illustrated by the example above. The example even *contradicts it* by saying there is information (implicitly from objective reality, what other sort of information is there?) coming to our eyes.

The reason all this stood out to me is the example is *true* on the face of it. That *is* how sight works. But Bartley takes it to be deeply wrong. I think it's important to be able to accept the accurate view about how eyesight works without thinking it has bad philosophical consequences. And mixing in differing accounts of sight (ambiguous statements on this issue continue) blurs the philosophical issues the paper is really about.
What explains the appeal of presentationalism to contemporary physicists and philosophers of physics? Part of its appeal no doubt consists just in the fact that, being contrary to common sense, it enjoys the possibility of being sophisticated.
I don't think one should put condescending psychologizing of one's opponents in serious papers.
Preoccupied with the avoidance of error, they suppose that, in order to avoid error, they must make no utterances that cannot be justified by - i.e., derived from - the evidence available. Yet sense perception seems to be the only evidence available; and sense perception is insufficiently strong, logically, to justify the claim about the existence of the external world, or about the various laws and entities of science, such as atoms and forces. The claim that there is an external world in addition to the evidence is a claim that goes beyond the evidence. Hence claims about such realms are unjustifiable. Worse, many presentationalists argue that they are intrinsically faulty: they are not genuine but pseudo-claims; they are indeed meaningless. For a word to have a meaning, they say, it must stand for an idea: that is, for a perception or for a memory of a perception. Since there can be no perception of any reality beyond perception, there can be no idea of it, and hence no meaningful language "about" it.
Wow! Now I see what Bartley has a problem with! He should have put this earlier. This section is good and clear.
Mach and his students against atomic theory. The best known of these problems relates to entropy. The second law of thermodynamics asserts the existence of irreversible processes. Thus differences of density, temperature and average velocity disappear, but do not arise, by themselves: entropy always increases. Yet it was difficult for atomic theory to explain processes of this sort: for in classical mechanics all motion is reversible. Hence it could be argued, as the physicist Loschmidt did, that heat and entropy simply could not involve mechanical motion of atoms and molecules. Boltzmann's work, by contrast (like Maxwell's in Britain), was directed to explaining entropy statistically in terms of atomic theory.
The problem of entropy brought up here is interesting. Is it solved?
Their view has been named "evolutionary epistemology" by the distinguished American psychologist Donald T. Campbell. It is an approach, based on biological and physiological research, which is utterly at variance with presentationalism.
I disagree. Evolution is not based on research; it's a philosophical, non-empirical, non-scientific theory. It's a statement about the logic of what happens in a certain category of situations.

Evolutionary theories of the history of the Earth/species/humans are based on research, and are scientific. But the research and empirical part of those isn't relevant to the philosophical issues at stake. It's specious to draw on that research for authority in a philosophical debate.
Until recently, there has been even less appreciation that Darwinian evolutionary mechanisms and western epistemologies could be compared - let alone that they conflict radically.
*If* there was a different way of creating knowledge, *then* it could be used to explain the observation that knowledge (in the form of animals, etc) has been created on Earth.

Christians know this perfectly well. *If* God is a good explanation for how knowledge is created -- if he can survive the philosophical debate -- *then* he can explain all the data (that knowledge was created) just fine. If you accept the God explanation, the data is not problematic.

So it's wrong to say that biological evolution research contradicts rival epistemologies. If there were any rival epistemologies, they could very possibly account for the data and there's no way to say in advance that they couldn't. And even if they couldn't account for the data, it still wouldn't contradict them. If I say you can create knowledge doing X, and this can't explain the origin of species, only other types of knowledge creation, then maybe there are two or more ways knowledge can be created. There's no contradiction.

The real problem with alternative epistemologies is that they are bad explanations, not that they fail to account for some data. Their real problem is that none of them *has some other explanation of how knowledge is created*. Or in other words, there exist no rival epistemologies that are remotely serious -- they are all missing the core idea an epistemology should have!

They shouldn't feel too bad. The Problem of Design was a big, hard problem. So far there's only one proposed solution that has ever gotten anywhere. It's the difficulty of solving the philosophical problem of design in new ways that keeps rival epistemologies down, not any empirical data.

Oh, and Darwin didn't explain this stuff, so the paper shouldn't be talking about him at all.

The paper goes on to give actual animal examples. Birds and trees and stuff. Meh. I think he'd be a lot better off explaining the philosophical theory of evolution (which surprisingly few people understand well) instead.
A good example is winetasting: the connoisseur knows what to look for and how to describe both what he searches for and what he experiences. His sensations are, as a result of cultivation, made more authoritative.
Errr, Bartley thinks people's statements or ideas can be more/less authoritative? He believes in authority?
Sensations are, then, anything but authoritative: they are themselves interpretations. They can be educated and refined. In this process they become more authoritative in the sense that they are better tested and educated but not in the sense that they are ever beyond error or improvement.
Yeah he does. Not in the standard way, but still some. This is bad! Critical tests and knowledge creation are not about gaining authority. The Popperian approach is about *rejecting the goal of authority* and replacing it with a new approach!

I like Feynman. He rejects authority.
A presentationalist would hardly deny this; quite the contrary, if he knows his Kant, he understands these matters.
The part after the semicolon is condescending to people who haven't read Kant and it's unexplained. Should have been omitted.
One will hardly be inclined to treat
It goes: [argument] Therefore: One will hardly be inclined to treat

It should go: [argument] Therefore: one should not do X, Y, Z which contradict the conclusions of the argument.

Arguments don't depend for their power on what *people* are *inclined* to do or treat.



The paper goes on at length about various animals. It makes the point that different animals have different sense apparatus. This is kinda problematic for people who consider the *human* senses to be the sacred, authoritative version of reality. Actually different things about reality can be seen with differently designed eyes, or with microscopes.

It also says "combinatorialism" is about as big a problem as justificationism. And emergence/emergent-evolution is its opposite. It says something about combinatorialism = reductionism too. I don't know what that is about, though of course reductionism is silly: we should operate at whatever level of explanation best solves our problems, not the lowest level possible.



So here's my version:

Presentationalism, instrumentalism, strong empiricism, Copenhagen interpretation of quantum physics, etc, say: do not explain. Just observe reality (perhaps predict, presumably using induction).

The right view is: do explain reality. That should be our goal. To *understand*. We should regard stuff as existing in reality if it's *needed* in our explanations. In other words, if we can't understand reality while denying X exists, then we should say X does exist.

Denying objective reality itself exists is quite a bad way to try to understand it, or anything!

And presentationalism/etc has no explanation of how knowledge is created to offer so it doesn't work as a rival theory to Popperian epistemology.

And authority is invalid. And there are no "raw" observations; sensory data is, and must be, interpreted.

Elliot Temple | Permalink | Messages (0)

Rationality

http://www.the-rathouse.com/bartagree.html
Following his teacher, Karl Popper, the operating principle of Bartley's rationalism is the formula 'I may be wrong and you may be right, and by means of critical discussion we may get nearer to the truth of the matter'.
Note that this conception of rationality is all about *how disagreements are treated*. It has an implicit "When we disagree, I may be wrong..." at the start.

Here is an equivalent statement of rationality:

Rationality is a property of how disagreements are treated, not which ideas one holds. Rational ways of approaching disagreements keep open the possibility of either party being mistaken, or both parties. Rational approaches are those that aim to eliminate errors. Irrational approaches presuppose a correct conclusion. They try to entrench it, or "make it rule". Aiming to convert people to your way of thinking is thus irrational, whereas aiming to discuss which way of thinking is true is rational.

Elliot Temple | Permalink | Messages (0)

Abduction

If abductive reasoning is about "inference to the best explanation" isn't that similar to the Popperian approach, which tries to find good explanations?

No, because:

abduction justifies the conclusion of the inference (that a particular explanation is the best one) by the process used to reach it (i.e., the conclusion is justified b/c it was reached using abduction rather than, say, guessing)

abduction, like induction, is supposed to offer a procedure for how to get from the input data to a conclusion, but actually doesn't.

critical rationalism (CR) doesn't need a procedure for how to create or pick explanations b/c it just says: guess them however you want, and if your method is dumb it doesn't really matter (but feel free to criticize your method and try to improve it).

The reason it doesn't matter to CR where ideas come from is CR doesn't try to justify ideas by having them come from an authoritative process. Instead, CR tries to improve ideas by *error elimination*. Although this does let us improve ideas, it never makes them authoritative or secure (or probably secure), as abduction aims to do.

Elliot Temple | Permalink | Messages (0)

Popper vs The World

Popper/Feynman/fallibilists/etc: Mistakes are very easy to make. In addition to imaginative conjectures, we need relentless criticism. If we don't have that criticism, we'll constantly be fooled by mistakes.

Others: Stop dismissing everything so easily. It's good enough. No one is going to be fooled unless they are an idiot. Imagination is more important than criticism. And anyway, the way to avoid mistakes is this: proven or supported ideas are not mistakes.

Elliot Temple | Permalink | Message (1)