Bitstring Compressibility

a Charles H. Bennett paper mentioned (search for "simple counting argument"; it's on page 3) that there's a counting argument showing most strings can't be compressed, or can only be compressed a little bit. Something like that (from memory). He didn't explain it.

So I thought about what it is.

Consider a bitstring of length 50. What does compressing it mean? It means you have some shorter bitstring which maps to it.

Suppose you were going to compress it to a length-40 bitstring. Could you compress each length-50 bitstring to a length-40 bitstring to save 10 bits of space? No, because there are 1024 times more length-50 bitstrings than length-40 bitstrings. So each length-40 bitstring would have to, on average, be mapped to 1024 length-50 bitstrings. So by removing 10 bits you're also losing 10 bits of information (you're specifying the target with 1-in-1024 ambiguity). Given a 40-bit string, you don't know WHICH 50-bit string it's shorthand for, b/c there are 1024 it could be shorthand for, if you use a method that compresses ALL the 50-bit strings uniformly (for example, by simply cutting off the last 10 bits, or the first 10 bits, or some other selection of 10 bits that don't even have to be in a row).
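
here's a quick Python sketch of that counting argument (the names are mine, just for illustration). it counts the strings directly, and also shows on a smaller scale how a uniform method like truncation piles many longer strings onto each shorter one:

    # counting argument: 2^50 vs 2^40 distinct bitstrings
    n_50 = 2 ** 50       # number of distinct 50-bit strings
    n_40 = 2 ** 40       # number of distinct 40-bit strings
    print(n_50 // n_40)  # 1024: each 40-bit string must stand for 1024 different 50-bit strings on average

    # smaller demo of the same thing: truncate 5-bit strings to 3 bits by cutting off the last 2 bits
    from itertools import product
    from collections import Counter
    counts = Counter(bits[:3] for bits in product("01", repeat=5))
    print(set(counts.values()))  # {4}: every 3-bit string is hit by exactly 2^2 = 4 different 5-bit strings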

so at most we could compress 1/1024th of all 50-bit strings to 40-bit strings. that's a theoretical limit; otherwise you'd have multiple 50-bit strings being compressed to the same 40-bit string.

you can take a 40-bit string, compress 2 50-bit strings to it, and then add an extra bit saying which of the two mapped strings you mean. like dealing with hash table collisions with a linked list. but that's stupid in this case. it makes more sense to just compress to a 41-bit string. (maybe in some more complex approach it's useful? but it won't fundamentally change the conclusions.)

so what if we were to set up a compression algorithm which uses ALL smaller strings to compress to? we can compress to a 1 bit string, a 2 bit string, etc, all the way up to 49 bits.

well there are half as many 49-bit strings as 50-bit strings, a quarter as many 48-bit strings, an eighth as many 47-bit strings, etc. so 1/2 + 1/4 + 1/8 + ... = 1. so ALL the shorter bitstrings, taken together, have just about the right number of possibilities. roughly. but not quite:

the trouble is at the bottom: a 2-bit string has 4 possibilities and a 1-bit string has 2 possibilities, so the sum stops short. adding up all the strings of lengths 1 through 49 gives 2^50 - 2 possibilities, which is 2 fewer than the number of 50-bit strings. even if we count 1 more for a 0-bit (empty) string we're still 1 short.

so at least 1 (2 if we can't use a 0-bit string) of the 50-bit strings can't be compressed.
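
a quick Python check of that count (just my arithmetic, nothing from Bennett's paper):

    # how many shorter bitstrings are there in total?
    shorter = sum(2 ** k for k in range(1, 50))  # lengths 1 through 49
    print(2 ** 50 - shorter)                     # 2: two 50-bit strings have no shorter string left to map to
    print(2 ** 50 - (shorter + 1))               # 1: still 1 short even if you also use the empty (0-bit) string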

also, how many can be compressed a lot? well, half of them are only getting compressed by 1 bit, and 1/4 are getting compressed by 2 bits. so if we define "a lot" as more than N bits, then at most 2^-N of them get compressed a lot. like if you want more than 5 bits of compression, then it's only about 1 out of 32 bitstrings that can be compressed that much.
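
a small Python sketch of that limit (the function and its name are mine, just for illustration). it counts every possible compression target short enough to save more than N bits and divides by the number of 50-bit strings:

    # upper bound on the fraction of 50-bit strings compressible by MORE than n_saved bits
    def max_fraction_compressible(n_saved, length=50):
        # such strings must map to targets of length <= length - n_saved - 1 (empty string included)
        targets = sum(2 ** k for k in range(0, length - n_saved))
        return targets / 2 ** length

    print(max_fraction_compressible(5))  # just under 1/32 = 0.03125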

so significant compression is sparse/rare. this is important b/c the field defines random by the content (uncompressible) rather than by the source (how you generated the data). which is interesting b/c it's a "content not source" thing like Critical Rationalism cares about, and it's also not how common sense thinks of random data. common sense says if you flip a coin a bunch, whatever you get is random. but computation theory says if you get 50 tails in a row, that sequence isn't random. but MOST sequences of randomly generated coinflips will be random (mostly uncompressible) sequences, because that's how most sequences are, as I was just showing (e.g. at best only about 1 in 32 sequences is compressible by more than 5 bits).

there's another issue I think is interesting. what if you make 1024 different compression algorithms? each one could compress a different 1/1024th of all the 50-bit strings to 40-bit strings. however, the overhead for specifying which of the 1024 compression algorithms a particular 40-bit string was encoded with would be 10 bits, b/c you need to specify a number from 1 to 1024 to say which compression algorithm it was. so you don't gain anything. this kind of overhead is an important issue which gets in the way of attempts to cheat.
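
the overhead is easy to see in a few lines of Python (the numbers are just the ones from the paragraph above):

    import math
    algorithms = 1024
    tag_bits = math.ceil(math.log2(algorithms))  # 10 bits just to name which algorithm was used
    payload_bits = 40
    print(tag_bits + payload_bits)               # 50: the "compressed" output is no shorter than the original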



Brains Are Computers

Adding 5+5 is an example of a computation. Why? By definition. "Computation" is a word which refers to calculating stuff like sums, matrix multiplication, binary logic operations, derivatives, etc.

Are all computations math? Sorta. Consider this computation:

(concatenate "cat" "dog")

Which outputs: "catdog"

The format I used is: (function data-1 data-2). That's the best format programmers have invented so far. There can be any number of pieces of data, including zero. Quotes indicate a string. And for data you can also put a function which returns data. That's nesting, e.g.:

(concatenate "meow" (concatenate "cat" "dog"))

Which outputs: "meowcatdog"

Is that math? It can be done with math. E.g. you can assign each letter a number and manipulate lists of numbers, which is what a Mac or PC would do to handle this. If you're interested in this topic, you might like reading Gödel, Escher, Bach, which discusses numeric representations.
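
For example, here's a toy version of the letters-as-numbers idea in Python (the encode/decode helpers are just mine for illustration):

    # assign each letter a number, then concatenation is just joining lists of numbers
    def encode(s):
        return [ord(c) for c in s]  # e.g. "cat" -> [99, 97, 116]

    def decode(nums):
        return "".join(chr(n) for n in nums)

    print(decode(encode("cat") + encode("dog")))                   # "catdog"
    print(decode(encode("meow") + encode("cat") + encode("dog")))  # "meowcatdog"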

But a human might calculate string concatenation in a different way, e.g. by writing each string on a piece of paper and then computing concatenations by taping pieces of paper together.

Humans have a lot of ways to do sums too. E.g. you can compute 5+5 using groups of marbles. If you want to know more about this, you should read David Deutsch's discussion of Roman numerals in The Beginning of Infinity, as well as the rest of his books.
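
You can mimic the marble method in Python too (a toy sketch of mine): represent a number as a pile of marbles and add by pooling the piles:

    # unary "marble" arithmetic: a number is just a pile of marbles
    five = ["o"] * 5
    another_five = ["o"] * 5
    print(len(five + another_five))  # 10: pool the piles, then count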

Moving on, computation is sorta like math but not exactly. You can think of computation as math or stuff that could be done with math.

A computer is a physical object which can do computations.

We can see that human intelligence involves computation because I can ask you "what is 5+5?" and you can tell me without even using a tool like a calculator. You can do it mentally. So either brains are computers or brains contain computers plus something else. There has to be a computer there somewhere because anything that can add 5+5 is a computer.

But we don't really care about an object which can add 5+5 but which can't compute anything else.

We're interested in computers which can do many different computations. Add lots of different numbers, multiply any matrices, find primes, and even do whatever math or math-equivalent it takes to write and send emails!

We want a general purpose computer. And human intelligence has that too. Humans can mentally compute all sorts of stuff like integrals, factoring, finding the area of shapes, or logic operations like AND, NOT, OR, XOR.

When we say "computer" we normally refer to general purpose computers. Specifically, universal classical computers.

A universal computer is a computer that can compute anything that can be computed. "Classical" refers to computers which don't use quantum physics. Quantum physics allows some additional computations if you build a special quantum computer.

A universal computer sounds really amazing and difficult to create. It sounds really special. But there's something really interesting. All general purpose computers are universal. It only takes a tiny bit of basic functionality to reach universality.

Every iPhone, Android phone, Mac, or PC is a universal computer. Even microwaves and dishwashers use universal computers to control them. The computer in a microwave can do any computation that a $100,000 supercomputer can do. (The microwave computer would take longer and you'd have to plug in extra disks periodically for computations that deal with a lot of data.)

All it takes to be a universal computer is being able to compute one single function: NAND. NAND takes two inputs, each of which is a 1 or 0, and it computes one output, a 1 or 0. NAND stands for "not and" and the rule is: return a 1 if not both inputs are 1.

That's it. You can use NAND to do addition, matrix multiplication, and send emails. You just have to build up the complexity step by step.

There are many other ways to achieve universality. For example, a computer which can compute AND and NOT, individually, is also universal. Being able to do NOT and OR also works. (Again these are simple functions which only have 1's and 0's as inputs and outputs.) If you want to see how they work, there are "truth tables" here which show lists of what the outputs are for all possible inputs: Wikipedia Link.
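
Here's a short Python sketch (my own, just to make this concrete) which builds NOT, AND, and OR out of nothing but NAND, and prints a truth table for all possible inputs:

    # inputs and outputs are always 0 or 1
    def NAND(a, b):
        return 0 if (a == 1 and b == 1) else 1  # return 1 if not both inputs are 1

    def NOT(a):
        return NAND(a, a)

    def AND(a, b):
        return NOT(NAND(a, b))

    def OR(a, b):
        return NAND(NOT(a), NOT(b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "NAND:", NAND(a, b), "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))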

We can see that the computer aspect of humans is universal because humans can mentally compute NAND, AND and NOT. That's more than enough to indicate universal computation.

To make this more concrete, you can ask me what (AND 1 1) is and I can tell you 1. You can ask me (NOT 0) and I can tell you 1. You can ask me (NAND 1 1) and I can tell you 0. I can do that in my head, no problem. You could too (at least if you learned how). You're capable.

So human thinking works by either:

  1. Universal classical computation; or

  2. Universal classical computation as well as something else.

I don't think there's a something else because there's nothing humans do, think, say, etc., which requires something else to explain how it's possible. And because no one has proposed any something else that makes sense. I don't believe in magical souls, and I'm certainly not going to start believing in them in order to say, "Humans have a universal classical computer plus a soul, which lets them do exactly the same things a universal classical computer with no soul can do." That'd be silly. And I don't think an iPhone has a soul in the silicon.

The brains of dogs, cats, parrots and monkeys are also universal classical computers. Remember, that's a low bar. It's actually really hard to make a computer do much of anything without making it universal. You can read about Universal Cellular Automata and how little it takes to get universality if you're interested. How easy universality is to achieve, and how there's an abrupt jump to it (rather than there being half-universal computers) is also explained in The Beginning of Infinity.

I won't go into arguing that cat brains are universal computers here. What I will say, briefly, is how humans are different from cats. It's kinda like how a PC is different from an iPhone: it has a different operating system and different apps. That's the basic difference between a person and a cat: different software. The hardware is different too, but the hardware fundamentally has the same capabilities, just like iPhones and PCs have different hardware with the same fundamental capabilities: they can do exactly the same computations. Humans have intelligence software of some sort – software which does intelligent thinking. Cats don't.

