Sunday, February 9, 2014

Kurzweil Can See the Future

This dude I love.  He says:  "I do expect that full MNT (nanotech) will emerge prior to Strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI)."

Sweet!  This guy's got it figured.  I didn't know that humans could predict the future, but turns out that, armed with the Law of Accelerating Returns (a law Kurzweil made up, in essence, to describe how technological innovations are coming more rapidly today than they did, say, at the time of the invention of the printing press), we can predict the future of technology.  Of course, we've been morons about the future of tech until Kurzweil, but don't fret about a track record, as he's rewriting the rules here.  His predictions are scientific.

I envision all this in a used car type of advertisement:

Want Strong AI?  No problem.  That's 2029!!!  Nanotech?  Look no further.  That's 2025.

Dude, seriously.  If you know when an innovation will happen, you know enough about the innovation to make it happen today.  The philosopher Karl Popper pointed this out years ago: predicting a technological innovation amounts to knowing the innovation, which amounts to already knowing how to do it today.  Hence, the whole prediction of inventions is bogus.  Listen up, Kurzweil.  Your silly made-up laws about the exponential rate of technological change don't tell us what technologies are coming.  At best, they only tell us that new tech will keep popping up, and the gap between old tech and new tech will keep getting smaller.  That quantitative trend itself will likely change (say, because the world changes radically in some other way, or who knows).  But what we can say for certain is that the qualitative aspect--what technology is next--is outside the law of accelerating returns and outside prediction generally.
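
Here's the whole predictive machinery boiled down to a few lines (my own toy illustration, with a made-up starting value and doubling time, not Kurzweil's actual figures):

    # A toy "accelerating returns" extrapolation: take a quantitative trend,
    # assume it keeps doubling, and project it forward.  All numbers invented.
    start_year, start_perf = 2014, 1.0   # arbitrary units of price-performance
    doubling_time_years = 1.5            # assumed purely for illustration

    for year in (2014, 2019, 2024, 2029):
        perf = start_perf * 2 ** ((year - start_year) / doubling_time_years)
        print(f"{year}: ~{perf:,.1f}x")
    # The curve hands you a bigger number every year.  It never says which
    # invention produces that number--which is the whole objection.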

Sorry, Kurzweil.  But nice book sales.



Wednesday, February 5, 2014

Information Terms

I'll use Kurzweil again as I find that he's a spokesperson for the latest sci-fi thinking on smart machines and the like.  He does his homework, I mean, so when he draws all the wrong conclusions he does it with an impressive command of facts.  He's also got an unapologetic vision, and he articulates it in his books so that critics and enthusiasts alike know exactly where he's coming from.  I like the guy, really.  He's just wrong.

For example, in his eminently skimmable "The Singularity Is Near", he quips on page who-cares that the project of Strong AI is to reverse engineer the human brain in "information terms."  What is this?  Everything is information these days, but the problem with seeing the world through the lens of "information" or even "information theory" is that it's just a theory about transmitting bits (or, "yes/no"s).  Then, computation is just processing bits (which is really all a Turing Machine does: traverse a graph, making a deterministic, discrete decision at each node), and communications is just, well, communicating them.  But information in this sense is just a way of seeing a process discretely.  You can then build a mathematics around it, and processes like communication can be handled in terms of "throughput" (of bits) and "loss" (of bits) and compression and so on.  Nothing about this is "smart" or should even really generate a lot of excitement about intelligence.  It's a way of packaging up processes so we can handle them.  But intelligence isn't really a "process" in this boring, deterministic way, and so we shouldn't expect "information terms" to shed a bunch of theoretical light on it.
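
To make the point concrete, here's a minimal sketch (my own illustration in Python, not anything Kurzweil offers) of what "information terms" actually measure: the average number of bits per symbol in a message, full stop.

    import math
    from collections import Counter

    def shannon_entropy_bits(message):
        """Average information per symbol, in bits: H = -sum(p * log2(p))."""
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    print(round(shannon_entropy_bits("yes no yes yes no maybe"), 3))
    # The result tells you how compressible the message is.  It says nothing
    # about what the message means to sender or receiver, or whether either
    # of them is intelligent.

That's the whole toolkit: counts and logarithms over bits.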

Intelligence is about skipping all those "yes/no" decisions and intuitively reaching a conclusion from background facts or knowledge in a context.  It's sort of anti-information terms, really.  Or to put it more correctly, after intelligence has reached its conclusions, we can view what happened as a process, discretize the process, graph it in "information terms" and voilà!, we've got something described in information terms.

So my gripe here is that "information" may be a groundbreaking way of understanding processes and even of expressing results from science (e.g., thermodynamics, or entropy, or quantum limitations, or what have you), but it's not in the driver's seat for intelligence, properly construed.  Saying we're reverse engineering the brain is a nice buzz-phrase for doing some very mysterious thinking about thinking; saying "oh, and we're doing it in information terms" doesn't really add much.  In fact, whenever we have a theory of intelligence (whatever that might look like, who knows?), we can be pretty confident that there'll be some way of fitting it into an information-terms framework.  My point here is that that's small solace when it comes to finding the elusive theory in the first place.

Shannon himself--the pioneer of information theory (Hurray!  Boo!)--bluntly dismissed any mystery when formalizing the theory, saying in effect that we should ignore what happens with the sender and receiver, and how the message gets translated into meaning and so on.  This is the "hard problem" of information--how we make meaning in our brains out of mindless bits.  That problem is not illuminated by formalizing the transmission of bits in purely physical terms between sender and receiver.  As Shannon knew, drawing the boundary at the hard problem meant he could make progress on the easier parts.  And so it is with science when it comes face to face with the mysteries of mind.  Ray, buddy, you're glossing it all with your information terms.  But then, maybe you have to, to have anything smart sounding to say at all.


Kurzweil's Confusion

The real mystery about intelligence is how the human brain manages to do so much, with so little.  As Kurzweil himself notes, the human brain "uses a very inefficient electrochemical, digital-controlled analog computational process.  The bulk of its calculations are carried out in the interneuronal connections at a speed of only about two hundred calculations per second (in each connection), which is at least one million times slower than contemporary electronic circuits."

Kurzweil is making a case for the mystery of human intelligence.  When the human brain, viewed as a computational device, comes up so short, what needs to be explained is how our vastly superior intelligent thinking is possible.  The more a purely computational comparison shows brains as inferior to computers, the more computation itself seems a poor model for intelligence.

When supercomputers like Cray's Jaguar achieve petaFLOP performance (a million billion floating point operations per second), and we still can't point to anything intuitive or insightful or human-like that they can do (like understand natural language), it suggests pretty strongly that brute computational power is not a good measure of intelligence.  In fact, Kurzweil himself makes this point pretty well, though of course it's not his intent.  To put it another way, when everything is computational speed, and humans lose the game, then true intelligence is clearly not computational speed.
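
The back-of-envelope arithmetic is worth spelling out.  The synapse count and Jaguar's exact figure below are ballpark numbers I'm supplying for illustration, not anything quoted above:

    # Rough throughput comparison.  Both figures are ballpark assumptions.
    connections = 1e14           # commonly cited rough count of interneuronal connections
    calcs_per_connection = 200   # per second, per Kurzweil's figure quoted above
    brain_ops = connections * calcs_per_connection   # ~2e16 "calculations" per second

    jaguar_flops = 1.75e15       # Jaguar benchmarked at roughly 1.75 petaFLOPS

    print(f"brain  ~{brain_ops:.1e} ops/s")
    print(f"Jaguar ~{jaguar_flops:.1e} FLOP/s")
    # Raw throughput lands within an order of magnitude--yet only one of the
    # two understands natural language.  Whatever intelligence is, it isn't
    # showing up in these numbers.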

To put it yet another way, the slower and crappier our "architecture" is when viewed as a glorified computer, the more impressive our actual intelligence is--and of course, the more the very notion of "intelligence" is manifestly not analyzable by computational means.

So much for Moore's Law leading us to Artificial Intelligence.  Next thought?