omg, opt-esc: a global hotkey for cocoa text fields that tries to complete the current word. try it!
there's also tons of other stuff, like ctrl-k and ctrl-y (like cut/paste, but with a separate buffer, and ctrl-k cuts to the end of the line instead of using a selection). or ctrl-a and ctrl-e (move the cursor to the start/end of the paragraph). you can also make your own hotkeys, including ones that do multiple things: i made one to duplicate the current line by piecing together several commands. custom bindings can even use hotkey sequences as triggers.
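here's a sketch of what a duplicate-line hotkey can look like, using cocoa's key bindings file at ~/Library/KeyBindings/DefaultKeyBinding.dict (the ctrl-d trigger is just an example choice, and note this version clobbers the regular clipboard because it chains copy:/paste:):

```
{
    /* ctrl-d: duplicate the current line by chaining standard cocoa selectors */
    "^d" = (
        "moveToBeginningOfLine:",
        "moveToEndOfLineAndModifySelection:",
        "copy:",
        "moveToEndOfLine:",
        "insertNewline:",
        "paste:"
    );
}
```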
i also just found out about the esc-based shortcuts in terminal (press esc, let go, then hit a key): esc-d, esc-delete, esc-f, esc-b :-D (those are forward-delete-word, backward-delete-word, forward-word, and backward-word)
apple does a great job with details like this, when they try. i hope OS X gets more love soon (though i accept that iphone os is more important to their business atm)
Elliot Temple at 2:36 PM
on June 2, 2010 | Permalink
| Comments (0)
From an email thread about free will:
Once upon a time (624 BC) Thales was born. Thus began philosophy.
Thales invented criticism. Instead of telling his followers what to believe, he made suggestions, and asked that they think for themselves and form their own ideas.
A little later, Xenophanes invented fallibility and the idea of seeking the truth to improve our knowledge without finding the final truth. He also identified and criticized parochialism.
In the tradition of Thales and Xenophanes came Socrates, the man who was wise for admitting his vast ignorance (among other things).
But only two generations after Socrates, philosophy was changed dramatically by Aristotle. Aristotle invented justificationism which has been the dominant school of philosophy since, and which opposes the critical, fallibilist philosophies which preceded him (and which were revived by Popper and Deutsch).
Aristotle's way of thinking had some major strands such as:
1) he wanted episteme -- objectively true knowledge.
2) he wanted to guarantee that he really had episteme -- he wanted justified, true knowledge. he rejected doxa (conjecture).
3) he thought he had episteme -- he was "the man who knows"
4) he thought he had justification
5) in relation to this, he invented induction as a method of justifying knowledge
Thus Aristotle rejected the fallibilist, uncertain ethos of striving to improve that preceded him, and replaced it with an authoritarian approach seeking guarantees and to establish existing knowledge against doubt.
Induction, like all other attempts, was unable to justify knowledge. Nothing can guarantee that some idea is episteme, so all attempts to do so failed.
Much later, Bacon attached induction to science and empiricism. And some people, like Hume, noticed it didn't work. But they didn't know what to do without it, because they were still focused on the same problem situation Aristotle had laid out: that we should justify our knowledge and find guarantees. Without induction they still had to figure out how to do that, and salvaging induction seemed easier than starting over. Hence the persistent interest in reviving induction.
What Popper did is go back to the old pre-Aristotle philosophical tradition which favors criticism and fallibilism, and which has no need for justification. Popper accepted that doxa (conjectures) have value, as Xenophanes had, and he explained how we can improve our knowledge without justification. He also refuted a bunch of justificationist ideas.
Then David Deutsch wrote "A Conversation About Justification" in _The Fabric of Reality_.
So how does that relate to free will? The basic argument against free will goes like this, "There is no way to justify free will, or guarantee it exists, therefore it's nonsense." The primary argument against free will is nothing but a demand for justification in the Aristotelian style.
As an example, one might say free will is nothing but a conjecture without any empirical evidence. To translate, that means free will is merely doxa, and hasn't got any empirical justification. This is essentially true, but not actually a problem.
Arguments against free will take many guises, but justificationist thinking is the basic theme giving them appeal.
Elliot Temple at 8:02 PM
on May 5, 2010 | Permalink
| Comments (12)
I made a new philosophy website.
It won't look right in Internet Explorer.
Elliot Temple at 1:08 AM
on April 25, 2010 | Permalink
| Comments (6)
Organizing code is an instance of organizing knowledge. Concepts like being clear, and putting things in sections, apply to programming and philosophy both.
DRY and YAGNI aren't just programming principles. They also apply to thinking and knowledge in general. It's better to recognize a general case than to think of several cases in separate, repetitive ways. And it's better to come up with solutions for actual problems than to create a bunch of over-engineered theory that may never have a purpose.
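The DRY point can be illustrated with a toy code example (mine, not from the original post):

```python
# Repetitive: one near-identical function per case.
def double_price(price):
    return price * 2

def triple_price(price):
    return price * 3

# DRY: recognize the general case once.
def scale_price(price, factor):
    return price * factor

# The general version covers both special cases, and any future ones,
# without writing a new function each time.
print(scale_price(10, 2))  # 20
print(scale_price(10, 3))  # 30
```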
The programming methodology of starting with the minimum thing that will work, and then making lots of little improvements to it until it's awesome -- based in part on actual feedback and experience with the early versions -- is also a good general method of thinking, connected to gradualist, evolutionary epistemology. It's also how, say, political change should be done: don't design a utopia and then try to implement it (like the French Revolution); instead look for smaller steps, so it's possible to change course midway once you learn more, so you get some immediate benefit, and to reduce risk.
Programmers sometimes write articles about how evil rewrites are, and how they lead to vaporware. Nothing is ever perfect, but existing products have a lot of useful work put into them, so don't start over (you'll inevitably run into new, unforeseen problems) but instead try to improve what you have. Similarly, philosophically, there are three broad schools of thought:
1) the conservative approach where you try to prevent any changes.
2) the liberal approach where you try to improve what you have.
3) the radical approach, where you say existing ideas/knowledge/traditions are broken and worthless, and should be tossed out and recreated from scratch.
The liberal, non-revolutionary approach is the right one not just for code rewrites but also in philosophy in general (and in politics).
Consider two black boxes which take input and give output according to some unknown code inside. You try them out, and both boxes give identical output for all possible inputs. You wonder: are the boxes identical? Are they the same, for all practical intents and purposes? Must they even be similar?
Programmers, although they don't usually think about it this way, already know the answer. Code can be messy, disorganized, and unreadable, or not. Code can have helpful comments, or not. One can spend a day refactoring or deleting code, and make sure all the tests pass, so it does exactly the same thing as before, but now it's better. Some code can be reused in other projects, and some isn't set up for that. Some code has tests, and some doesn't. One box could be written in C, and another in lisp.
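The two-boxes point can be sketched in a few lines of Python (a made-up illustration, not code from the post):

```python
# Two "black boxes": identical output for every input, very different insides.

def box_a(n):
    # Clear and structured: sum of 1..n via the closed-form formula.
    return n * (n + 1) // 2

def box_b(n):
    # Messy and harder to maintain: same answer by brute-force looping.
    t = 0
    i = 0
    while i < n:
        i = i + 1
        t = t + i
    return t

# Identical denotation...
assert all(box_a(n) == box_b(n) for n in range(100))
# ...but box_a is easier to read, verify, and change later.
```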
None of these things matter if you only treat code as a black box and just want to use it. But if you ever have to change the code, like adding new features, doing maintenance or doing bug fixes, then all these differences which don't affect the code's output are important.
I call what the code actually does its "denotation" and the other aspects its "structure", and I call this field structural epistemology. Programming is the best example of where it comes up, but it also has more philosophical relevance. One interesting question is if/how/why evolution creates good structure in genetic code (I think it does, but I'm not so clear on what selection pressure caused it). Another example is that factories have knowledge structure issues: you can have two factories both making toys, with the same daily output, but one is designed so it's easier to convert it to a car factory later.
Elliot Temple at 6:40 PM
on April 22, 2010 | Permalink
| Comments (7)
Liberalism in the Classical Tradition
by Ludwig von Mises, p 51
Repression by brute force is always a confession of the inability to make use of the better weapons of the intellect
This is similar to Godwin:
If he who employs coercion against me could mould me to his purposes by argument, no doubt he would. He pretends to punish me because his argument is strong; but he really punishes me because his argument is weak.
Elliot Temple at 1:41 PM
on April 16, 2010 | Permalink
| Comments (0)
Now you know.
In the interview, he expresses disagreement with Ayn Rand and her view that the State is bad because it uses force against its citizens. He does not provide any argument that she's mistaken, or that his view is better.
Milton also, for example, advocated a negative income tax. That means if you contribute a sufficiently small amount to the economy then the State takes money by force from other citizens and gives it to you.
The purpose of this post is simply to inform people about how a libertarian icon is a blatant Statist. (And, by the way, he's not the only one.)
Elliot Temple at 7:21 PM
on April 9, 2010 | Permalink
| Comments (3)
The Retreat To Commitment, by William Warren Bartley III, p 123:
There may, of course, be other nonlogical considerations which lead one to grant that it would be pointless to hold some particular view as being open to criticism. It would, for instance, be a bit silly for me to maintain that I held some statements that I might make—e.g., "I am over two years old"—open to criticism and revision.
Yet the fact that some statements are in some sense like this "beyond criticism" is irrelevant to our problems of relativism, fideism, and scepticism.
The claim that some statements are beyond criticism is anti-fallibilist and anti-Popperian.
It is not at all silly to maintain that the example statement is open to criticism. It's essential. Not doing so would be deeply irrational. We can make mistakes, and denying that has consequences, e.g. we'll wonder: how do we know which things we can't be mistaken about? And that question begs for an authoritarian, as well as false, answer.
You may be thinking, "Yes, Elliot, but you are over two years old, and we both know it, and you can't think of a single way that might be false." But I can.
For example, my understanding of time could contain a mistake. Is that a ridiculous possibility? It is not. Most people today have large mistakes in their understanding of time (and of space)! Einstein and other physicists discovered that time and space are connected, and it's weird and doesn't follow common sense. For example, the common sense concept of two things happening simultaneously at different places is a mistake: what appears simultaneous actually depends on where you watch from. If some common sense notions of time can be mistaken, why laugh off the possibility that our way of keeping track of how much time has passed contains a mistake?
Another issue is when you start counting. At conception? Most people would say at birth. But why birth? Maybe we should start counting from the time Bartley was a person. That may have been before or after birth. According to many people, brain development doesn't finish until age 20 or so. In that case, a 21 year old might only have been a full person for one year.
Of course there are plenty of other ways the statement could be mistaken. We must keep an open mind to them so that when someone has a new, counter-intuitive idea we don't just laugh at him but listen. Sure the guy might be a crank, but if we ignore all such ideas that will include the good ones.
Elliot Temple at 6:10 PM
on March 12, 2010 | Permalink
| Comments (38)
X is a good trait. A has more of X than B does. Therefore A is better than B.
That is a non sequitur.
You can add, "All other things being equal" and it's still a non sequitur.
X being a good or desirable trait does not mean all things with more X are better. There being all sorts of reasons X is amazing does not mean X is amazing in all contexts and in relation to all problems.
You'd need to say X is universally good, and all other things are equal. In other words, you're saying the only difference between A and B is amount of something that is always good. With the premises that strong, then the claim works. However, it's now highly unrealistic.
It's hard to find things that are universally good to have more of. Any medicine or food will kill you if you overdose enough. Too much money would crush us all, or can get you mugged. An iPhone is amazing, but an iPhone that's found by a hostage taker who previously asked for everyone's phones can get you killed.
You can try claims like "more virtue is universally good". That is true enough, but that's because the word "virtue" is itself already context sensitive. It's also basically a tautology and immune to criticism, because whatever is good to do is what's virtuous to do. And it's controversial how to act virtuously or judge virtue. If you try to get specific like, "helping the needy is universally good," then you run into the problem that it's false. For example, if Obama spent too much time working in soup kitchens, that wouldn't leave him enough time to run the country well, so it'd turn out badly.
You could try "more error correction is a universal good thing" but that's false too. Some things are good enough, and more error correction would be an inefficient use of effort.
You might try to rescue things by saying, "X is good in some contexts, and this is one of those contexts." Then you'll need to give a fallible argument for that. That is an improvement on the original approach.
Now for the other premise, "all other things being equal." They never are. Life is complicated, and there are almost always dozens of relevant factors. Even if they were equal, we wouldn't know it, because we can never observe all other things to check for their equality. We could guess they are equal, which would hold if we didn't miss anything. But the premise "all other things being equal, unless I've overlooked some relevant factor" isn't so impressive. You might as well just say directly, "A is better than B, unless I'm mistaken."
Elliot Temple at 4:22 PM
on March 12, 2010 | Permalink
| Comment (1)
People commonly say things like, "That's a good point, but alone it's insufficient for me to change my position."
In a debate club meeting, or a Presidential debate, most of the non-partisan audience usually comes away thinking both sides made some good points.
Debaters think an idea can suffer a few setbacks, but still be a good idea. They aren't after perfection but just trying to get the better of their debating opponent.
These are examples of the same mistake underlying critical preferences: simultaneously accepting two conflicting ideas (such as a position, and a criticism of that position).
PS Notice that "simultaneously accepting two conflicting ideas (and making a decision about the issue)" would be a passable definition of coercion for TCS to use. This highlights the connection between coercion and epistemology. The concept of coercion in TCS is about when rational processes in a mind break down. The TCS theory of coercion tries to answer questions like: What happens then? (Suffering; a big mess.) What causes the breakdown to happen? (Different parts of the mind in conflict and the failure to resolve this by creating one single idea of how to proceed.) What's a description of what the mind looks like when it happens? (It contains conflicting, active theories.)
Elliot Temple at 4:20 PM
on March 10, 2010 | Permalink
| Comments (0)
T1 is a testable, scientific theory to solve problem P. T2 is a significantly less testable theory to solve P. In Popper's view, barring some important other consideration, if both T1 and T2 are non-refuted then we must prefer T1 and say it's better.
But T1 might not be better. You could easily choose T1 so it's false and T2 so it's true as best we know today, without contradicting the situation description.
You can assert that T1 is better, as far as we know, given the current state of knowledge. But is it? Where is the argument that it is? This looks to me like both explanationless philosophy and positive philosophy (T1 is supported by its testability, and T2 isn't). T2 is losing out without any criticism of it.
What we should do is not say T1 is better, but say: T2 needs to be testable to be a viable theory because X. X can be a generic reason, such as that scientific theories should be testable and P is a scientific problem. Once we say this, we are making a critical argument: we're criticizing T2. This offers T2 the chance to defend itself, which never came up in the original analysis.
It's now up to T2 to offer a reason that it doesn't need to be more testable, or actually is more testable. T2 can criticize the criticism of it, or be refuted. (BTW if T2 didn't already contain this reason, and it has to be invented, then T2 is refuted and T2b is now standing, where T2b consists of the content of T2 plus the new content that criticizes this criticism of T2.)
Then if the testability criticism is criticized, it can either be refuted or be amended to include a criticism of that criticism. And so on. This approach takes seriously the idea that we only learn from criticism. That makes sense because criticisms are error-correcting statements: they explain a flaw in something, which helps us avoid a mistake.
Elliot Temple at 8:59 AM
on March 9, 2010 | Permalink
| Comment (1)