Friday, April 18, 2014

Rethinking Technological Determinism

Consider the case of Albert Einstein.  His now-famous paper on Special Relativity, published in 1905, had such a seismic impact on our understanding of physics that it--along with his theory of General Relativity--eventually razed the entire edifice of classical mechanics, dating back to that other genius, Isaac Newton.  Relativity, as we now know, replaced Newton's laws governing objects traveling at constant velocity (special relativity) and undergoing acceleration (general relativity).  Back in 1905, in what's come to be called the "annus mirabilis", or "miracle year", Einstein published three other groundbreaking papers before the year's end as well.  Taken together, they rewrote our basic ideas about the fundamental workings of the physical universe.  What a year it was.  Working at a patent office without "easy access to a complete set of scientific reference materials", and additionally burdened by a dearth of "available scientific colleagues to discuss his theories", Einstein nonetheless plodded along in what today would be an almost ludicrously information-poor environment--no Google Scholar, no Facebook, no Twitter feeds--publishing his landmark theories on the photoelectric effect, Brownian motion, mass-energy equivalence, and special relativity.  Collectively, the papers were to form the two central pillars of the New Physics: Relativity and Quantum Theory.

How did he do this, with so little to work with?  Contrast the case of Einstein--lowly Einstein, sitting in the patent office, cut off from vast repositories of other thinkers' theories--with much of the discussion these days about smarts and information.  I call today the age of "hyper-information."  Much of the hyper-information buzz comes out of Silicon Valley and is (of course) tied to "the Internet."  The Internet visionaries (and here "Internet" as a "thing" is itself misleading, but that's another discussion) believe that we're getting "smarter", that technology itself is imbued with "smarts" and gets smarter, and that the world is changing for the better, and rapidly.  It's progress, everywhere.  The implication is that everyone will be an Einstein soon (or at least their best, most informed selves), and even our machines will be Einsteins, too.  Kevin Kelly (of Wired fame) writes about the "intelligenization" of our world, and the famous (infamous?) entrepreneur/visionary Ray Kurzweil explains how we're even getting smarter in a quantifiable, measurable way: we can graph innovation and intellectual endeavor as an exponential curve, the "Law of Accelerating Returns."  With all this exciting talk, we might be tempted to think Einsteins are springing up everywhere these days, too.  But no, unless I'm missing something.  One might be tempted, sure, but a cursory tour through today's "Internet" culture will quickly disabuse one of this notion (Tweet, anyone?).

But why not?  Why not truly big ideas, springing up everywhere, with all this super-information-guided intelligence in the air?  Indeed, why do we still talk about "Einstein" at all?  (He's so 1905.)  There's an easy answer, namely that "Einsteins don't come along too often in any era, so we'll just have to wait."  It's a good one, I think, but it's an uncomfortable fit with the exponential-curve rhetoric about our rapidly "intelligenized" (that's not even a word!), hyper-informed world.  (Nassim Taleb, author of The Black Swan, suggested that we'd make better predictions in economics if we stopped reading all the "just-in" news sources.)  "Well, just how soon can we expect another Einstein-type genius?  We have problems to solve, after all!  Is he right around the corner, in a patent office (or maybe he's a sys admin), as yet undiscovered?"

In fact the silliness of this inquiry belies an underlying theme that I think is a bit more troubling (and serious).  For many of the Technorati--the digital visionaries and know-it-alls of all things technological--don't really want to discuss individual genius anymore.  The game has changed, and it's telling what's been left out of the new one.  Kelly goes so far as to explain that "individual innovation" is a myth; what we call innovation is really the deterministic evolution of ideas and technology--it's predictable.  Whereas the philosopher of science Karl Popper argued that technological innovation is intrinsically unpredictable (as is economic or social prediction), the inventor Ray Kurzweil puts it on a graph.  And Kurzweil argues that it's not just the pace of technological change, but the nature of the innovations themselves, that we can predict.  We know, for instance, that computers will be smart enough by 2040 (give or take) to surpass human intelligence ("intelligence" is always left poorly analyzed or unanalyzed, of course, and it's a safe bet that what's ignored is whatever Einstein was doing).  From there, we can pass the torch for innovating and theorizing to machines, which will crank out future "Einsteins", as machines will be smarter than all of the humans on the planet combined (Kurzweil really says this).  In other words, our "hyper-information" fetish these days is also a deliberate refocus away from singular human brilliance toward machines, trends, and quantification.  Worrying about Einstein is yesterday's game; today, we can be assured that the world will yield up predictable genius in the form of the smart technology of the future, and from there, we can predict that superintelligent solutions and ideas will emerge, answering any pressing problems that remain.

But is this the way to go?  Is it just me, or does this all sound fantastical, technocratically controlling (surely not an "Einstein" virtue), and, well, just plain lazy and far-fetched?  Consider: (1) Where is the actual evidence for the "intelligenization" of everything, as Kelly puts it?  What do we even mean here by intelligence?  We need good ideas, not just more information, so how do we know that all of our information is really leading to "smarter"?  (Sounds like a sucker's argument, on its face.)  (2) As a corollary to (1): in a world where we are quantifying and measuring and tracking and searching everything at the level of facts, not ideas, who has time to do any real thinking these days, anyway?  Here we have the tie-in to Kelly and his "intelligenization" meme: how convenient to simply pronounce all the gadgets around us as themselves intelligent, and thereby obviate or vitiate the gut feeling that thinking isn't happening in ourselves as much.  So what if we're all so un-Einstein-like and acting like machines?  Smarts are now in our devices, thank God.  (And, murmured perhaps: "This has to rub off on us eventually, right?")  And finally, (3) is automation--that is, computation--the sort of "thinking" we need more, or less, of today?  Might computation sometimes--occasionally, maybe frequently--in fact be at odds with the kind of silent, unperturbed contemplation that Einstein was doing back in 1905, in his information-poor environment (even by the standards of his day)?  Could it be?

In fact history is replete with examples like Einstein's, all suggesting that the "hyper-information" modern mindset is apples-and-oranges with the theoretical advances we require in human societies to make large, conceptual leaps forward, embracing new, hitherto unforeseen possibilities.  In astronomy, for instance, the flawed Ptolemaic models using perfect circles and devices like epicycles and equants were maintained and defended with vast quantities of astronomical data about the positions of planets (mainly).  No one bothered to wonder whether the model was itself wrong, until Copernicus came along.  Copernicus wasn't a "data collector" but was infused with an almost religious passion that a heliocentric model would simplify all the required fudges and calculations and data manipulations of the older Ptolemaic systems.  Like Einstein's, his idea was deep and profound, but it wasn't necessarily information-centric, in today's sense.  The complex calculations required by Ptolemaic astronomers were impressive at the shallow level of data or information manipulation (as orbits are actually ellipses, not perfect circles, the numerical calculations required to predict the movements of the known planets were extremely complex), but they missed the bigger picture, because the underlying conceptual model was itself incorrect.  It took someone sitting outside of all that data (though he surely was knowledgeable about the Ptolemaic model itself) to have the insight that led to the (eponymous) Copernican Revolution.

Similarly with Newton, who was semi-sequestered when working out the principles of his mechanics (Newton's three laws, along with his law of universal gravitation).  Likewise with Galileo, who broke with the Scholastic tradition that dominated 16th-century Europe at the time, and was thinking about Democritus, an almost forgotten pre-Socratic philosopher who predated Aristotle and had a negligible role in the dominant intellectual discussions of the day.

Fast forward to today.  The current fascination with attempting to substitute information for genuine knowledge or insight fits squarely in the Enlightenment framework promulgated by later thinkers like Condorcet, among others.  In fact, while true geniuses like Copernicus or Galileo or Newton gave us the conceptual foundations (and one must include Descartes here, of course) for an entire worldview shift, the Enlightenment philosophes immediately began constructing a shallower rendition of these insights in the Quantifiable Culture type of thinking that reduced complex, human phenomena to quantifiable manipulations and measurements (can someone say "mobile phone apps"?).  Whereas Copernicus was infused with the power and awe of a heliocentric universe (and the Sun itself was an ineffable Neo-Platonic symbol of divinity and intelligence), the Condorcets of the world became enamored with the power and control that quantifying everything could confer upon us, at the level of society and culture.  And today, where a genius such as Alan Turing offered us insights about computable functions--and also proved the limits of these functions, ironically, even as he clarified what we mean by "computation"--we've translated his basic conceptual advances into another version of the shallow thesis that culture and humanity are ultimately just numbers and counting and technology.  In this type of culture, ironically, new instances of genius are probably less likely to recur ("numbers and control and counting" ideas aren't the sort of ideas that Einstein or Newton trafficked in; the mathematical similarities here are a mere surface patina).  Indeed, by the 18th century the interesting science was again on the "outside" of the Enlightenment quantification bubble, in the thinking of Maxwell, Hamilton, and the iconoclasts who gave rise to what we now call the "Romantic Age of Science", such as Joseph Banks.
This, so soon after Condorcet proclaimed that all questions would be answered (and of course from within the framework he was proposing).  And so it's a clever twist to go ahead and explain away human ingenuity, genius, and all the rest, as Kelly et al. do, by striking derogatory, almost antagonistic attitudes toward human innovation.  What bigger threat to technological determinism exists than a mind like Einstein's?  Better to rid the world of his type, which is in essence what Kurzweil's Law of Accelerating Returns attempts to accomplish, by ignoring the possibility of radical innovation that comes from truly original thinking.

We've seen this modern idea before--it's scientism (not science), the shallower "interpretations" of the big ideas that regrettably step in and fashion theories into more practical and controlling policies.  Unfortunately, these "policies" also reach back into the deeper wellspring out of which truly original ideas originate, and attempt to rewrite the rules of the outsiders--the Einsteins and all the rest--so that they no longer pose a threat to a brave new world, a world of hyper-information, today.  This misguided human motive can reach such absurdity that the very possibility of original thought is explained away, as with Kelly.  To say that such a collection of views about human culture is selling us short is putting it mildly, indeed.

The point here is that flourishing culture inspires debate, original thinking, and a diversity of ideas.  In today's hyper-information world, neutrality and objectivity are illusory.  So, I fear, is genuine thinking.  When our thought leaders proclaim technological determinism, and provide us with such impoverished viewpoints that discussions of Einstein are deemed quaint and simply irrelevant in the coming techno-utopias, we're committing the fallacy once again of fashioning technocratic, debate-chilling policies and worldviews out of deeper and more profound original thoughts.  We ought instead to ask the hard questions about how we change history (and what needs to be changed), rather than produce bogus graphs and pseudo-laws (like the Law of Accelerating Returns) while ignoring history and bigger ideas.  We need these ideas--and the thinkers who give birth to them.  The future is open, and it's ours to make: "Who should we be?"  And: "How shall we live?"  These are the questions that every generation asks anew, and it's the current generation's task to reveal the full spectrum of possibilities to the next, or to allow for enough inefficiency and dissent that new possibilities can emerge.  We can't make Einsteins (Kurzweil's ideas notwithstanding), but we can make a culture where a sense of possibility is an intellectual virtue, and humility about the future is a virtue too, rather than ignorance or resistance to the supposed march of technological progress.  As always, our "progress" lies in our own, unexplored horizons, not in our tools.  Just ask Einstein.







Friday, April 4, 2014

Mankind in Transition

A strange, almost creepy book by Masse Bloomfield, a guy I'd never heard of before.  He offers a usable description of technology, from tools, to machines, to automation.  Argues that our destination is interstellar travel (in contrast to someone like Kurzweil, who would argue that our destination is the Machineland itself).  His view of robots, powered by Artificial Intelligence, appears to preserve the difference between biologically-inspired minds and machines.  He sees AI as necessary for controlling robots, which will become ubiquitous in military applications.  This suggestion seems entirely plausible, actually (set aside the ethical dimensions here), and his view of artificial intelligence as the software that controls robots for practical uses seems, well, plausible too.


Smarter Than You Think

Clive Thompson is fed up with all the cautionary digital talk these days.  A book arguing that the Internet is making us all smarter.  Take that, Nicholas Carr.
  

Rapture for the Geeks

A bit dated, but here's an excellent article on the links between checking your fav mobile device and addiction.  Speaking of dated, here's a NYT article going all the way back to 2000 on the connection between computer networks and biological systems (both evolving, as it were).  I found these references in the chapter notes of a quirky but eminently readable little book called Rapture for the Geeks, by Richard Dooling.  The subtitle is "When AI Outsmarts IQ", and it's a semi-tongue-in-cheek look at Strong AI and visions of the future like the Singularity.

On the first article, you can Google "mobile phones and addiction" or what have you and get the gist.  Most of the discussion is wink-and-nod; I'm sure there are some serious studies out there.  The Huff Post talks about it here.  Whether compulsively checking or paying attention to your mobile is an "addiction" inherits all the baggage of talking about "addiction", but it's clear enough these days that many of us would be happier if we engaged in that sort of behavior less often.

On the second article, it seems there's a general, somewhat ill-defined notion out there that computational networks are evolving, and similarly (in some sense) to biological networks.  Or, rather, that the concept of evolution of complex systems is general enough to include technological evolution (of which the digital technology and especially the Internet is a subset).  This is a beguiling notion when viewed from afar, but when you zoom in on it, hoping for clarity, it's tough to determine the meat and potatoes of it all.  How is this system evolving like that one?  one is tempted to ask.  Or rather, if "evolution" is generic, what does it cover, then?  What doesn't evolve?  To nutshell all of this, say it thusly:  in what interesting, non-trivial, sense is technology evolving like we think biological species have (and are)?

Naturally skeptical of all the geek-rapture about smart machines, my hunch is that there's no real there there.  Technology is simply getting--well, we're getting more and more of it, and it's more and more connected.  And that's about all there is to the idea, when you analyze it with something like a thoughtful and healthy skepticism.  Nothing wrong with that; we could use more of it these days, it seems.

On the book, I dunno.  Read it if you're interested in the whole question of whether machines are becoming intelligent like humans.






Sunday, February 9, 2014

Kurzweil Can See the Future

This dude I love.  He says: "I do expect that full MNT (nanotech) will emerge prior to Strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI)."

Sweet!  This guy's got it figured.  I didn't know that humans could predict the future, but turns out that, armed with the Law of Accelerating Returns (a law Kurzweil made up, in essence, to describe how technological innovations are coming more rapidly today than they did, say, at the time of the invention of the printing press), we can predict the future of technology.  Of course, we've been morons about the future of tech until Kurzweil, but don't fret about a track record, as he's rewriting the rules here.  His predictions are scientific.

I envision all this in a used car type of advertisement:

Want Strong AI?  No problem.  That's 2029!!!  Nanotech?  Look no further.  That's 2025.

Dude, seriously.  If you know when an innovation will happen, you know enough about the innovation to make it happen today.  The philosopher Karl Popper pointed this out years ago: predicting technological innovation amounts to knowing the innovation, which amounts to already knowing how to do it today.  Hence, the whole prediction of inventions is bogus.  Listen up, Kurzweil.  Your silly made-up laws about the exponential rate of technological change don't tell us what technologies are coming.  At best, they only tell us that new tech will keep popping up, and the gap between old tech and new tech will keep getting smaller.  That quantitative trend itself will likely change (say, because the world changes radically in some other way, or who knows).  But what we can say for certain is that the qualitative aspect--what technology is next--is outside the law of accelerating returns and outside prediction generally.
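To be fair about what the quantitative claim does say, here's a toy sketch of an "accelerating returns" style model.  The starting gap and shrink ratio are numbers I made up for illustration, not Kurzweil's figures; the point is that the model outputs shrinking time gaps and nothing else.

```python
# A toy "accelerating returns" model: each successive innovation arrives
# after a gap that is a fixed fraction of the previous gap.  The parameters
# below are illustrative assumptions, not Kurzweil's published figures.

def gap_before_innovation(n, first_gap=10.0, ratio=0.8):
    """Years between innovation n and innovation n+1 (geometric shrinkage)."""
    return first_gap * ratio ** n

gaps = [round(gap_before_innovation(n), 3) for n in range(5)]
print(gaps)  # [10.0, 8.0, 6.4, 5.12, 4.096]
```

Note what the model cannot do: solve for the content of innovation number n.  That's Popper's point in a nutshell.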

Sorry, Kurzweil.  But nice book sales.



Wednesday, February 5, 2014

Information Terms

I'll use Kurzweil again as I find that he's a spokesperson for the latest sci-fi thinking on smart machines and the like.  He does his homework, I mean, so when he draws all the wrong conclusions he does it with impressive command of facts.  He's also got an unapologetic vision, and he articulates it in his books in a way that critics and enthusiasts alike really know where he's coming from.  I like the guy, really.  He's just wrong.

For example, in his eminently skimmable "The Singularity is Near", he quips on page who-cares that the project of Strong AI is to reverse engineer the human brain in "information terms."  What is this?  Everything is information these days, but the problem with seeing the world through the lens of "information" or even "information theory" is that it's just a theory about transmitting bits (or "yes/no"s).  Then computation is just processing bits (which is really all a Turing machine does: traverse a graph, making a deterministic, discrete decision at each node), and communications is just, well, communicating them.  But information in this sense is just a way of seeing a process discretely.  You can then build a mathematics around it, and processes like communication can be handled in terms of "throughput" (of bits) and "loss" (of bits) and compression and so on.  Nothing about this is "smart" or should even really generate a lot of excitement about intelligence.  It's a way of packaging up processes so we can handle them.  But intelligence isn't really a "process" in this boring, deterministic way, and so we shouldn't expect "information terms" to shed a bunch of theoretical light on it.
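To make the "just bits" point concrete, here's a minimal sketch of Shannon's standard entropy formula (textbook material, not anything from Kurzweil's book).  It prices a message in bits per symbol from symbol frequencies alone; two messages with the same statistics get the same score, whether or not either one means anything.

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of the message's symbol frequencies: a measure of
    transmission cost in bits, with nothing to say about meaning."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy_bits_per_symbol("aabbaabb"))  # 1.0 (two equally likely symbols)
# An equation and its scramble cost exactly the same number of bits:
print(entropy_bits_per_symbol("E=mc^2") == entropy_bits_per_symbol("2c=^Em"))  # True
```

Whatever "meaning" is, this formalism was never trying to capture it, which was exactly Shannon's boundary-drawing.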

Intelligence is about skipping all those "yes/no" decisions and intuitively reaching a conclusion from background facts or knowledge in a context.  It's sort of anti-information terms, really.  Or to put it more correctly: after intelligence has reached its conclusions, we can view what happened as a process, discretize the process, graph it in "information terms", and voilà!, we've got something described in information terms.

So my gripe here is that "information" may be a groundbreaking way of understanding processes and even of expressing results from science (e.g., thermodynamics, or entropy, or quantum limitations, or what have you), but it's not in the driver's seat for intelligence, properly construed.  Saying we're reverse engineering the brain is a nice buzz-phrase for doing some very mysterious thinking about thinking; saying "oh, and we're doing it in information terms" doesn't really add much.  In fact, whenever we have a theory of intelligence (whatever that might look like, who knows?), we can be pretty confident that there'll be some way of fitting it into an information-terms framework.  My point here is that it's small solace when it comes to finding that elusive theory in the first place.

Shannon himself--the pioneer of information theory (Hurray!  Boo!)--bluntly dismissed any mystery when formalizing the theory, saying in effect that we should ignore what happens with the sender and receiver, and how it all gets translated into meaning and so on.  This is the "hard problem" of information--how we make meaning in our brains out of mindless bits.  That problem is not illuminated by formalizing the transmission of bits in purely physical terms between sender and receiver.  As Shannon knew, drawing the boundary at the hard problem meant he could make progress on the easier parts.  And so it is with science when it comes face to face with the mysteries of mind.  Ray, buddy, you're glossing it all with your information terms.  But then, maybe you have to, to have anything smart-sounding to say at all.


Kurzweil's Confusion

The real mystery about intelligence is how the human brain manages to do so much, with so little.  As Kurzweil himself notes, the human brain "uses a very inefficient electrochemical, digital-controlled analog computational process.  The bulk of its calculations are carried out in the interneuronal connections at a speed of only about two hundred calculations per second (in each connection), which is at least one million times slower than contemporary electronic circuits."
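The quoted comparison is easy to sanity-check with back-of-the-envelope arithmetic.  The 200-calculations-per-second-per-connection figure is Kurzweil's; the 1 GHz rate for a "contemporary electronic circuit" is my own round-number assumption:

```python
# Back-of-envelope check of Kurzweil's "at least one million times slower" claim.
brain_calcs_per_connection = 200   # per second, per connection (Kurzweil's figure)
circuit_ops_per_second = 1e9       # assumed ~1 GHz electronic switching rate

slowdown = circuit_ops_per_second / brain_calcs_per_connection
print(f"{slowdown:,.0f}x slower per connection")  # 5,000,000x slower per connection
assert slowdown >= 1e6  # consistent with "at least one million times slower"
```

The arithmetic checks out: per-connection, the brain looks hopeless as a computer, and yet here we are.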

Kurzweil is making a case for the mystery of human intelligence.  When the human brain, viewed as a computational device, comes up so short, what needs to be explained is how our vastly superior intelligent thinking is possible.  The more a purely computational comparison shows brains as inferior to computers, the more computation itself seems a poor model for intelligence.

When supercomputers like Cray's Jaguar achieve petaFLOP performance (a million billion floating-point operations per second), and we still can't point to anything intuitive or insightful or human-like that they can do (like understand natural language), it suggests pretty strongly that brute computational power is not a good measure of intelligence.  In fact, Kurzweil himself makes this point pretty well, though of course it's not his intent.  To put it another way: when everything is computational speed, and humans lose the game, then true intelligence is clearly not computational speed.

To put it yet another way, the slower and crappier our "architecture" is when viewed as a glorified computer, the more impressive our actual intelligence is--and of course, the more the very notion of "intelligence" is manifestly not analyzable by computational means.

So much for Moore's Law leading us to Artificial Intelligence.  Next thought?