Saturday, April 17, 2010

The End is Near

On the flight back from Monaco I read Bozo Sapiens between glasses of neat Scotch. I would read some, then I would stare out the window of the plane at an altitude of several miles and wonder how we (Humans) have managed to achieve anything. Then I would remember that we haven't.

The book points out carefully how nearly everything you ever do, or see anyone else doing, is no different from anything you might catch a monkey doing if you watched it long enough. Reading Bozo Sapiens puts you in pretty much the same mood as that Monkeysphere article that came out a long time ago - before Sarah and the contest and New York and the Project made it so my leisure time was rearranged and timeshared between a handful of personalities. The gist of the book is: you're forgiven - you're only human, and it's inevitable that you will exhibit these destructive tendencies. Don't worry about it. And: you're damned - you're only human, so ... well, you're only human.

I don't buy it.

It's really easy to be a cynic. Trust me. It's easy to look at civilization and think to yourself, "I didn't know they stacked garbage that high." I'd venture that if you haven't ever thought that, then you probably haven't seen enough of the world. But if that's where your thought process stops, well, you still haven't seen enough of the world.

It's not like there aren't people working toward the goal. And I mean this beyond Michio Kaku-esque platitudes, feel-good but totally innocent of content. There are people with ideas and direction.

Ben Goertzel may embody the best mixture of pragmatism and ambition. The project he manages, OpenCog, is an Open Source artificial intelligence project. The A.I. field has long been plagued by researchers looking for the Holy Grail, the single concept that is the Secret of intelligence. Goertzel seems to know that the secret to building a mind is already known by any infant human being: long, patient days of work and attention, structuring the thing layer by layer and training it painstakingly by experience. His weighted labeled hypergraph structure doesn't rely on any Deus Ex Machina trickery; it is merely a semantic web where the nodes and links contain semantic information. But if you look at the architecture of OpenCog you quickly see that much of what it is designed to do (and much of what humans do) is far more basic than concept formation and manipulation, and just as important to "intelligent" behavior. Dogs and chimps are fairly rotten at the algebra of "meaning" but still pretty good at solving basic practical problems and keeping themselves alive. This outline contains quite a bit of viewpoint-altering information about what our sensory modalities actually are and what they are good for in an A.I. context. My favorite example is the triangular lightbulb. You just imagined a triangular lightbulb when you read that sentence, even though you've probably never seen one before. You did it automatically, and I didn't even give you any instructions on what I meant. The layers between our "mind" and our senses do a lot of work for us of which we're not even aware.

This segues nicely into Geoffrey Hinton, whose wonderful Google talk showcases the closest thing I've witnessed to an example of a machine convincingly thinking. You can even duplicate this and play with it if you have Matlab.

I would be remiss to neglect mentioning Ray Kurzweil, who has been popularizing the Singularity since way before it was cool. I haven't read so much Kurzweil since I stopped caring if anybody else was convinced about this stuff. I met a woman in a hotel bar in Santa Fe who argued with me for forty minutes that the Singularity didn't make any sense, no matter what logical approach vector I used to describe it. People have difficulty conceptualizing exponentials - grains of rice on chessboards and such. Didn't change things one way or the other that we made a "Singularity" in my hotel room later.

There are literally way too many authors cashing in on the same narrow band of popular futurist ideas (strong A.I., radical genetic alteration, advanced nanotechnology, make it "weird" and force it all through the die of an uncreative and half-baked plot) but I will unhesitatingly recommend Blindsight by Peter Watts. I feel like Blindsight is going to be the Snow Crash of the next thirty years. Or maybe just the next five years, what with exponential growth.

People have concrete plans to achieve our apotheosis. We are working toward it. Watch this space.

Lesson 1: On Rigor

It was a very long time before I saw my first "rigorous proof". Geometry in 8th grade purported to be a proof-based class, what with lists of theorems and lemmas, but it felt completely inorganic. There was no deep result made more meaningful by beautiful argument, no cleverness.

We literally listed theorems and lemmas on one side of the page, with definitions on the other.

In college, however, you very quickly run into the type of people who swing entirely the other direction. They desire perfect rigor--an infuriating pedantry that similarly detracts from what, in my opinion, makes mathematics so interesting. Does it matter if every line of a proof is written in flawless logical notation, upside-down As and backwards capital Es abounding? Or is it more powerful to understand the proof itself, and to eschew formality in favor of clarity?

I would say that whatever path leads you to understand why a given statement is true, or why a proof actually is a proof, is more useful--more powerful. I find that pedantry in notation obfuscates the truth of things, but I've known people who wouldn't have it any other way--they would literally fill chalkboards with excruciatingly detailed proofs of honestly very simple concepts.

Much of this is beside the point, though, because you probably don't have a good idea of what perfect mathematical rigor even is. But don't worry, mathematicians didn't for a couple thousand years, either.


Lesson One: A Formula You Probably Memorized
A Preface to Infinite Series

Throwing aside rigor for the moment, it shouldn't be difficult to convince yourself there's an infinite number of colored triangles along the band in the image to the left.

There's an infinite number of purple-shaded triangles, and an infinite number of yellowish-orange-dirt colored triangles. I don't really know what to call either of those colors.

Regardless, the square's area is clearly a^2. It follows from the formula for the area of a triangle ((1/2) * base * height) that the bigger white triangle has an area of (1/2)a^2, and the smaller "big" white triangle has an area of (1/4)a^2.

This leaves a total area of (1/4)a^2 for the entire strip of infinitely many colored triangles.

Now consider the largest pair of colored triangles. One has an area of (1/2)*(1/2)a*(1/2)a, or (1/8)a^2, and the other has an area of (1/2)*(1/2)a*(1/4)a, or (1/16)a^2. The goofy looking quadrilateral formed by the union of the two largest colored triangles then has an area of (3/16)a^2.

Notice now that all triangles of the same color have roughly the same shape--their dimensions are just scaled down. Again, throwing aside rigor, it should not be difficult to convince yourself that successively smaller triangles differ in area by a factor of 1/4. If you're having difficulty seeing this, write out the areas for the first handful of triangles of either type (which isn't terribly rigorous), or simply notice that a triangle of a given size can be constructed from 4 triangles of the next smaller size, like so:

We can now construct an expression for the total area of any given pair of colored triangles.

It is simply (3/16)a^2 * (1/4)^n, where n = 0 would correspond to the first pair of triangles, and so on.
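If you'd rather check this expression than trust the picture, here's a quick Python sketch (the side length `a` is my own arbitrary choice) that computes each pair's area directly from its halved-each-step dimensions and compares it against the closed form:

```python
# Pair n's triangles have the same proportions as the first pair,
# with all lengths scaled by (1/2)^n. Compare the directly computed
# area against the closed form (3/16) * a^2 * (1/4)^n.
a = 1.0  # arbitrary side length for illustration
for n in range(5):
    s = (1/2)**n                     # linear scale factor for pair n
    tri1 = 0.5 * (s*a/2) * (s*a/2)   # (1/8)a^2 when n = 0
    tri2 = 0.5 * (s*a/2) * (s*a/4)   # (1/16)a^2 when n = 0
    print(n, tri1 + tri2, (3/16) * a**2 * (1/4)**n)  # the two agree
```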

If we wish to represent the area of that entire strip, it follows that we simply sum every single pair of colored triangles.

But we already know what this area is! We've constructed our strip such that it takes up only (1/4)a^2 of the area of our square.

This must mean that the sum

(3/16)a^2 * (1/4)^0 + (3/16)a^2 * (1/4)^1 + (3/16)a^2 * (1/4)^2 + (3/16)a^2 * (1/4)^3 + ... must equal (1/4)a^2.

If you're clever, you'll notice the quantity (3/16)a^2 is present in every term, so it can be factored out, leaving us with:

(3/16)a^2 * [ 1 + (1/4)^1 + (1/4)^2 + (1/4)^3 + ... ] = (1/4)a^2

Which, by a little rearrangement, implies that:

[ 1 + (1/4)^1 + (1/4)^2 + (1/4)^3 + ... ] = (1/4)a^2 * (16/3) * (1/a^2) = 4/3
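None of this is rigorous, but it's easy to check numerically. A short Python sketch (the value of `a` is my own arbitrary pick) summing the first several dozen terms:

```python
# The colored-triangle areas (3/16)a^2 * (1/4)^n should total (1/4)a^2,
# and the bracketed series 1 + (1/4) + (1/4)^2 + ... should total 4/3.
a = 2.0  # arbitrary side length for illustration
strip_area = sum((3/16) * a**2 * (1/4)**n for n in range(60))
series = sum((1/4)**n for n in range(60))
print(strip_area, 0.25 * a**2)  # both ~1.0
print(series, 4/3)              # both ~1.3333
```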

Now, the mathematician must ask, what have we actually accomplished? All we did was cut a square into a bunch of pieces, and subsequently show that the sum of these pieces was in fact the total area of the square, and then rearranged everything a little bit. Is this really surprising at all?

How about we take it a little bit further.

Consider, again, the infinite series:


[ 1 + (1/4)^1 + (1/4)^2 + (1/4)^3 + ... ]

We've already demonstrated what it equals, but let's play with it a little bit. We know each successive term differs only by a factor of (1/4), just by inspection. Then, it follows that:

[ 1 + (1/4)^1 + (1/4)^2 + (1/4)^3 + ... ] - (1/4) * [ 1 + (1/4)^1 + (1/4)^2 + (1/4)^3 + ... ] =

[ 1 + (1/4)^1 + (1/4)^2 + (1/4)^3 + ... ] - [ (1/4)^1 + (1/4)^2 + (1/4)^3 + (1/4)^4 + ... ] = 1

Well, that's all well and good. But isn't this little relation true if we replace 4 with any natural number?

Let's see:

[ 1 + (1/m)^1 + (1/m)^2 + (1/m)^3 + ... ] - (1/m) * [ 1 + (1/m)^1 + (1/m)^2 + (1/m)^3 + ... ] = 1

This appears to be alright, as long as m behaves properly, but we'll get to that in a second. Our sum appears twice on the left hand side, so let's factor it:

[ 1 + (1/m)^1 + (1/m)^2 + (1/m)^3 + ... ] * (1 - (1/m)) = 1

And, by division:

[ 1 + (1/m)^1 + (1/m)^2 + (1/m)^3 + ... ] = 1 / (1 - (1/m))

Which might be more familiar as

[ 1 + r^1 + r^2 + r^3 + ... ] = 1 / (1 - r)

Where r is just that common factor by which successive terms are related. This is just the formula for a geometric series, which we appear to have derived from an intuitive geometric argument, along with some algebraic hand-waving. What isn't clear from our argument is under what conditions this formula fails.

For instance, it doesn't really make sense that this formula would work if (1/m) were greater than or equal to one, because our infinite series would grow infinitely large, whereas our formula predicts it would either be a particularly infuriating indeterminate, 1/0, or it would be negative.

It's also unclear whether this formula would still hold for negative numbers greater than -1 (it does), and whether it would fail for negative numbers less than -1 (again, it does). Does it work for complex numbers as well? Interestingly enough, you might have heard the phrase "Radius of Convergence" with regard to when infinite series like our example above are convergent. This phrase doesn't make much sense when you're talking about Real numbers, because you can only really go two directions: more positive or more negative. On the complex plane, however, the Radius of Convergence is actually a radius, centered around 0, and in this case it means that all complex numbers of modulus or "length" less than 1 satisfy our nifty little formula.
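You can watch that boundary behavior directly. Here's a small Python sketch (the sample ratios are my own arbitrary picks) comparing partial sums against 1/(1 - r) for real and complex ratios inside the unit circle, and showing the blow-up outside it:

```python
# Partial sums of 1 + r + r^2 + ... versus the formula 1 / (1 - r).
# Agreement is expected exactly when |r| < 1, real or complex.
def partial_sum(r, terms=200):
    total, power = 0, 1
    for _ in range(terms):
        total += power
        power *= r
    return total

for r in [1/4, -1/2, 0.5 + 0.5j]:  # all have modulus < 1
    print(r, partial_sum(r), 1 / (1 - r))  # the two columns agree

# Outside the unit circle the partial sums just grow without bound,
# nowhere near the formula's prediction of 1/(1-2) = -1:
print(abs(partial_sum(2, terms=50)))
```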

But, again, we haven't really proved why our formula works in the region that it does. It's just intuitively clear that it would work in that region, and not outside of it. Unfortunately, the machinery required to rigorously prove all of this would require several more pages of theorem proving, so we'll just put that off...for a while.

More interesting, to me at least, is that such a profound result can be constructed so simply. You don't need a command of any math higher than basic algebra. And if I have any single objective in writing about mathematics, it's to prove (there's that word again) that there is no topic in mathematics, physics, or anything really, that can't be explained in an intuitive and simple way.

Sunday, April 11, 2010

First

The conceit of the blogger is that there exist people on the internet that care about blogs.