
Sunday, May 11, 2014

Artificial Intelligence Is Back, Or Not.

Last year John Markoff of the NYT wrote about the rapid advance of Artificial Intelligence.  This year, the New Yorker's Gary Marcus suggests that it's all hype.  Not only hype, but hype again.  Marcus, a cognitive psychologist at New York University, seems to have a point:  how many times have we heard that the "smart robots" are just around the corner?

You Do Want To Express Yourself, Don't You?

Tumblr, it seems, is bucking the trend on the Web toward one-size-fits-all.  Tumblr founder David Karp says the Web used to be chaotic and messy, to be sure, but also more fun, and personal.  Back in the salad days of the Web (before, say, the early 2000s), the lack of standards meant more of your personality and creativity could be expressed on blogs and websites.  Today, the utilitarian focus in the Valley, and the engineer's mindset of efficiency and control, have slowly squeezed out such possibilities, ironically rendering a Web obsessed with buzzwords like "personalization" much less personal.

Karp's not a lone voice here, either.  Jaron Lanier, Virtual Reality pioneer and author of the 2010 hit You Are Not A Gadget, argues the same point.  In fact, Lanier's critique--first in YANAG, then in his 2013 sequel Who Owns The Future?--is more trenchant than the milquetoast remarks from Karp bemoaning the cookie-cutter trends on the modern Web.  Lanier, for instance, thinks that so-called Web 2.0 designs favor machines and efficiency, literally at the expense of "personhood" itself.  For Lanier, the individual creator has no real home on the Web these days, as sites like Facebook force people to express their personalities in multiple-choice layouts and formatted input boxes.  To Lanier, these surface designs are evidence of even deeper attacks on personhood, like redefining the very notion of "friend" to something shallow and unimportant.  The Web is dominated by a "hivemind" mentality where no one person really matters, and the collective is serving some greater purpose, like building smarter machines.  The Web, concludes Lanier, is set up to capture the machine-readable features of people for advertising purposes and other demeaning anti-humanist ends, not to enlarge and empower them as individuals.

Still, if you're a Lanier fan, Mr. Karp's remarks seem headed in the right direction, even if only because you can now customize the look and feel of your Tumblr blog on your mobile device.  But the deeper question here is whether you have anything much to say on a blog in the first place, and whether the Web environment is cordial and prepared to hear it, if you do.  Maybe Tumblr's counter-steer is minimal, but one hopes that further and more meaningful changes are still to come.

Friday, April 18, 2014

Rethinking Technological Determinism

Consider the case of Albert Einstein.  His now famous paper on Special Relativity, published in 1905, had such a seismic impact on our understanding of physics that it--along with his theory of General Relativity--eventually razed the entire edifice of classical mechanics, dating back to that other genius, Isaac Newton.  Relativity, as we now know, superseded Newton's laws:  special relativity for objects traveling at constant velocity, general relativity for objects undergoing acceleration and for gravitation.  Back in 1905, in what's come to be called the "annus mirabilis", or "miracle year", Einstein published three other groundbreaking papers before the year's end as well.  Taken together, they rewrote our basic ideas about the fundamental workings of the physical universe.  What a year it was.  Working at a patent office and without "easy access to a complete set of scientific reference materials", and additionally burdened by a dearth of "available scientific colleagues to discuss his theories", Einstein nonetheless plodded along in what today would be an almost ludicrously information-poor environment--no Google Scholar, no Facebook, no Twitter feeds--publishing his landmark theories on the photoelectric effect, Brownian motion, mass-energy equivalence, and special relativity.  Collectively, the papers were to form the two central pillars of the New Physics:  Relativity and Quantum Theory.

How did he do this, with so little to work with?  Contrast the case of Einstein--lowly Einstein, sitting in the patent office, cut off from vast repositories of other thinkers' theories--with much of the discussion these days about smarts and information.   I call today the age of "hyper-information."  Much of the hyper-information buzz comes out of Silicon Valley and is (of course) tied to "the Internet."  The Internet-visionaries (and here "Internet" as a "thing" is itself misleading, but this is another discussion) believe that we're getting "smarter", that technology itself is imbued with "smarts" and gets smarter, and that the world is changing for the better, and rapidly.   It's progress, everywhere.  The implication is that everyone will be an Einstein soon (or at least their best, most informed selves), and even our machines will be Einsteins, too.  Kevin Kelly (of Wired fame) writes about the "intelligenization" of our world, and the famous (infamous?) entrepreneur/visionary Ray Kurzweil explains how we're even getting smarter in a quantifiable, measurable way:  we can graph innovation and intellectual endeavor as an exponential curve, the "Law of Accelerating Returns."  With all this exciting talk, we might be tempted to think Einsteins are springing up everywhere these days, too.  But no, unless I'm missing something.  One might be tempted, sure, but a cursory tour through today's "Internet" culture will quickly disabuse one of this notion (Tweet, anyone?).

But why not?  Why not truly big ideas, springing up everywhere, with all this super-information-guided-intelligence in the air?  Indeed, why do we still talk about "Einstein" at all?  (He's so 1905.)  There's an easy answer, namely that "Einsteins don't come along too often in any era, so we'll just have to wait."  It's a good one, I think, but it's an uncomfortable fit with the exponential curve rhetoric about our rapidly "intelligenized" (that's not even a word!), hyper-informed world.  (Nassim Taleb, author of The Black Swan, suggested that we'd make better predictions in economics if we stopped reading all the "just-in" news sources.)  "Well, just how soon can we expect another Einstein-type genius?  We have problems to solve, after all!  Is he right around the corner, in a patent office (or maybe he's a Sys Admin), as yet undiscovered?"

In fact the silliness of this inquiry belies an underlying theme that I think is a bit more troubling (and serious).  For, many of the Technorati--the digital visionaries and know-it-alls of all things technological--don't really want to discuss individual genius anymore.  The game has changed, and it's telling what's been left out of the new one.  Kelly goes so far as to explain that "individual innovation" is a myth; what we call innovation is really the deterministic evolution of ideas and technology--it's predictable.  Whereas philosopher of science Karl Popper argued that technological innovation is intrinsically unpredictable (as is economic or social prediction), inventor Ray Kurzweil puts it on a graph.  And Kurzweil argues that it's not just the pace of technological change, but the nature of the innovations themselves, that we can predict.  We know, for instance, that computers will be smart enough by 2040 (give or take) to surpass human intelligence ("intelligence" is always left poorly analyzed or un-analyzed, of course, and it's a safe bet that what's ignored is whatever Einstein was doing).  From there, we can pass the torch for innovating and theorizing to machines, which will crank out future "Einsteins", as machines will be smarter than all of the humans on the planet combined (Kurzweil really says this).  In other words, our "hyper-information" fetish these days is also a deliberate refocus away from singular human brilliance toward machines, trends, and quantification.  Worrying about Einstein is yesterday's game; today, we can be assured that the world will yield up predictable genius in the form of the smart technology of the future, and from there, we can predict that superintelligent solutions and ideas will emerge to answer any pressing problems that remain.

But is this the way to go?  Is it just me, or does this all sound fantastical, technocratically controlling (surely not an "Einstein" virtue), and, well, just plain lazy and far-fetched?  Consider:  (1) where is the actual evidence for the "intelligenization" of everything, as Kelly puts it?  What do we even mean here by intelligence?  We need good ideas, not just more information, so how do we know that all of our information is really leading to "smarter"?  (Sounds like a sucker's argument, on its face.)  (2)  As a corollary to (1):  in a world where we are quantifying and measuring and tracking and searching everything at the level of facts, not ideas, who has time to do any real thinking these days, anyway?  Here we have the tie-in to Kelly and his "intelligenization" meme:  how convenient to simply pronounce all the gadgets around us as themselves intelligent, and thereby obviate or vitiate the gut feeling that thinking isn't happening in ourselves as much.  So what that we're all so un-Einstein-like and acting like machines?  Smarts are now in our devices, thank God.  (And, murmured perhaps: "This has to rub off on us eventually, right?")  And finally (3), is automation--that is, computation--the sort of "thinking" we need more, or less, of today?  Might computation sometimes--occasionally, maybe frequently--in fact be at odds with the kind of silent, unperturbed contemplation that Einstein was doing, back in 1905, with his information-poor environment (even by the standards of his day)?  Could it be?

In fact history is replete with examples like Einstein's, all suggesting that the "hyper-information" modern mindset is apples-and-oranges with the theoretical advances we require in human societies to make large, conceptual leaps forward, embracing new, hitherto unforeseen possibilities.  In astronomy, for instance, the flawed Ptolemaic models using perfect circles and devices like epicycles, equants, or deferents were maintained and defended with vast quantities of astronomical data about the positions of planets (mainly).  No one bothered to wonder whether the model was itself wrong, until Copernicus came along.  Copernicus wasn't a "data collector" but was infused with an almost religious conviction that a heliocentric model would simplify all the required fudges and calculations and data manipulations of the older Ptolemaic systems.  Like Einstein's, his idea was deep and profound, but it wasn't necessarily information-centric, in today's sense.  The complex calculations required by Ptolemaic astronomers were impressive at the shallow level of data or information manipulation (as orbits were actually ellipses, not perfect circles, the numerical calculations required to predict movements of the known planets were extremely complex), but they missed the bigger picture, because the underlying conceptual model was itself incorrect.  It took someone sitting outside of all that data (though he surely was knowledgeable about the Ptolemaic model itself) to have the insight that led to the (eponymous) Copernican Revolution.

Similarly with Newton, who was semi-sequestered when working out the principles of his mechanics (his three laws of motion and the law of universal gravitation).  Likewise with Galileo, who broke with the Scholastic tradition that dominated European thought at the time, and was thinking about Democritus, an almost forgotten pre-Socratic philosopher who predated Aristotle and played a negligible role in the dominant intellectual discussions of the day. []

Fast forward to today.  The current fascination with attempting to substitute information for genuine knowledge or insight fits squarely in the Enlightenment framework promulgated by later thinkers like Condorcet, and others.  In fact, while true geniuses like Copernicus or Galileo or Newton gave us the conceptual foundations (and one must include Descartes here, of course) for an entire worldview shift, the Enlightenment philosophes immediately began constructing a shallower rendition of these insights in the Quantifiable Culture type of thinking that reduced complex, human phenomena to quantifiable manipulations and measurements (can someone say, "mobile phone apps"?).  Whereas Copernicus was infused with the power and awe of a heliocentric universe (and the Sun itself was an ineffable Neo-Platonic symbol of divinity and intelligence), the Condorcets of the world became enamored with the power and control that quantifying everything could confer upon us, at the level of society and culture.  And today, where a genius such as Alan Turing offered us insights about computable functions--and also proved the limits of these functions, ironically, even as he clarified what we mean by "computation"--we've translated his basic conceptual advances into another version of the shallow thesis that culture and humanity are ultimately just numbers and counting and technology.  In this type of culture, ironically, new instances of genius are probably less likely to recur ("numbers and control and counting" ideas aren't the sort of ideas that Einstein or Newton trafficked in; mathematical similarities here are a mere surface patina).  Indeed, by the late 18th and 19th centuries the interesting science was again on the "outside" of the Enlightenment quantification bubble, in the thinking of Hamilton, Maxwell, and iconoclasts such as Joseph Banks who gave rise to what we now call the "Romantic Age of Science".  This, so soon after Condorcet proclaimed that all questions would be answered (and of course from within the framework he was proposing).  And so it's a clever twist to go ahead and explain away human ingenuity, genius, and all the rest as Kelly et al. do, by striking derogatory, almost antagonistic attitudes toward human innovation.  What bigger threat to technological determinism exists than a mind like Einstein's?  Better to rid the world of his type, which is in essence what Kurzweil's Law of Accelerating Returns attempts to accomplish, by ignoring the possibility of radical innovation that comes from truly original thinking.

We've seen this modern idea before--it's scientism (not science), the shallower "interpretations" of the big ideas that regrettably step in and fashion theories into more practical and controlling policies.  Unfortunately, these "policies" also reach back into the deeper wellspring out of which truly original ideas originate, and attempt to rewrite the rules of the outsiders--the Einsteins and all the rest--so that they no longer pose a threat to a brave new world, a world of hyper-information, today.  This misguided human motive can reach such absurdity that indeed, the very possibility of original thought is explained away, as with Kelly.  To say that such a collection of views about human culture is selling us short is putting it mildly, indeed.

The point here is that flourishing culture inspires debate, original thinking, and a diversity of ideas.  In today's hyper-information world, neutrality and objectivity are illusory.  So, I fear, is genuine thinking.  When our thought leaders proclaim technological determinism, and provide us with such impoverished viewpoints that discussions of Einstein are deemed quaint and simply irrelevant in the coming techno-utopias, we're committing the fallacy once again of fashioning technocratic, debate-chilling policies and worldviews out of deeper and more profound original thoughts.  We ought instead to ask the hard questions about how we change history (and what needs to be changed), rather than produce bogus graphs and pseudo-laws (like the Law of Accelerating Returns) that ignore history and bigger ideas.  We need these ideas--and the thinkers who give birth to them.  The future is open, and it's ours to make:  "Who should we be?"  And:  "How shall we live?"  These are the questions that every generation asks anew, and it's the current generation's task to reveal the full spectrum of possibilities to the next, or to allow for enough inefficiency and dissent that new possibilities can emerge.  We can't make Einsteins (Kurzweil's ideas notwithstanding), but we can make a culture where a sense of possibility is an intellectual virtue, and humility about the future is a virtue, too, rather than ignorance or resistance to the supposed march of technological progress.  As always, our "progress" lies in our own, unexplored horizons, not in our tools.  Just ask Einstein.







Friday, April 4, 2014

Mankind in Transition

A strange, almost creepy book by Masse Bloomfield, a guy I'd never heard of before.  He offers a usable description of technology, from tools, to machines, to automation.  Argues that our destination is interstellar travel (in contrast to someone like Kurzweil, who would argue that our destination is the Machineland itself).  His view of robots, powered by Artificial Intelligence, appears to preserve the difference between biologically-inspired minds and machines.  He sees AI as necessary for controlling robots, which will become ubiquitous in military applications.  This suggestion seems entirely plausible, actually (set aside the ethical dimensions here), and his view of artificial intelligence as the software that controls robots for practical uses seems, well, plausible too.


Smarter Than You Think

Clive Thompson is fed up with all the cautionary digital talk these days.  A book arguing that the Internet is making us all smarter.  Take that, Nicholas Carr.
  

Rapture for the Geeks

A bit dated, but here's an excellent article on the links between checking your fav mobile device and addiction.  Speaking of dated, here's a NYT article going all the way back to 2000 on the connection between computer networks and biological systems (both evolving, as it were).  I found these references in the chapter notes of a quirky but eminently readable little book called Rapture for the Geeks, by Richard Dooling.  The subtitle is "When AI Outsmarts IQ" and it's a semi-tongue-in-cheek look at Strong AI and visions of the future like the Singularity.

On the first article, you can Google "mobile phones and addiction" or what have you and get the gist.  Most of the discussion is wink-and-nod; I'm sure there are some serious studies out there.  The Huff Post talks about it here.  Whether compulsively checking or paying attention to your mobile is an "addiction" inherits all the baggage of talking about "addiction", but it's clear enough these days that many of us would be happier if we engaged in that sort of behavior less often.

On the second article, it seems there's a general, somewhat ill-defined notion out there that computational networks are evolving, and similarly (in some sense) to biological networks.  Or, rather, that the concept of evolution of complex systems is general enough to include technological evolution (of which digital technology, and especially the Internet, is a subset).  This is a beguiling notion when viewed from afar, but when you zoom in on it, hoping for clarity, it's tough to determine the meat and potatoes of it all.  How is this system evolving like that one? one is tempted to ask.  Or rather, if "evolution" is generic, what does it cover, then?  What doesn't evolve?  To put it all in a nutshell:  in what interesting, non-trivial sense is technology evolving the way we think biological species have evolved (and are evolving)?

I'm naturally skeptical of all the geek-rapture about smart machines, and my hunch is that there's no real there there.  Technology is simply getting--well, we're getting more and more of it, and it's more and more connected.  And that's about all there is to the idea, when you analyze it with something like a thoughtful and healthy skepticism.  Nothing wrong with that; we could use more of it these days, it seems.

On the book, I dunno.  Read it if you're interested in the whole question of whether machines are becoming intelligent like humans.






Sunday, February 9, 2014

Kurzweil Can See the Future

This dude I love.  He says:  "I do expect that full MNT (nanotech) will emerge prior to Strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI)."

Sweet!  This guy's got it figured.  I didn't know that humans could predict the future, but turns out that, armed with the Law of Accelerating Returns (a law Kurzweil made up, in essence, to describe how technological innovations are coming more rapidly today than they did, say, at the time of the invention of the printing press), we can predict the future of technology.  Of course, we've been morons about the future of tech until Kurzweil, but don't fret about a track record, as he's rewriting the rules here.  His predictions are scientific.

I envision all this in a used car type of advertisement:

Want Strong AI?  No problem.  That's 2029!!!  Nanotech?  Look no further.  That's 2025.

Dude, seriously.  If you know when an innovation will happen, you know enough about the innovation to make it happen today.  The philosopher Karl Popper pointed this out years ago:  predicting technological innovation amounts to knowing the innovation, which amounts to already knowing how to do it today.  Hence, the whole prediction of inventions is bogus.   Listen up, Kurzweil.  Your silly made-up laws about the exponential rate of technological change don't tell us what technologies are coming.  At best, they only tell us that new tech will keep popping up, and the gap between old tech and new tech will keep getting smaller.  That quantitative trend itself will likely change (say, because the world changes radically in some other way, or who knows).  But what we can say for certain is that the qualitative aspect--what technology is next--is outside the law of accelerating returns and outside prediction generally.

Sorry, Kurzweil.  But nice book sales.



Wednesday, February 5, 2014

Information Terms

I'll use Kurzweil again, as I find that he's a spokesperson for the latest sci-fi thinking on smart machines and the like.  He does his homework, I mean, so when he draws all the wrong conclusions he does it with an impressive command of facts.  He's also got an unapologetic vision, and he articulates it in his books in a way that lets critics and enthusiasts alike know exactly where he's coming from.  I like the guy, really.  He's just wrong.

For example, in his eminently skimmable "The Singularity is Near", he quips on page who-cares that the project of Strong AI is to reverse engineer the human brain in "information terms."  What is this?  Everything is information these days, but the problem with seeing the world through the lens of "information" or even "information theory" is that it's just a theory about transmitting bits (or "yes/no"s).  Then, computation is just processing bits (which is really what a Turing Machine does:  traverse a graph, making a deterministic, discrete decision at each node), and communications is just, well, communicating them.  But information in this sense is just a way of seeing a process discretely.  You can then build a mathematics around it, and processes like communication can be handled in terms of "throughput" (of bits) and "loss" (of bits) and compression and so on.  Nothing about this is "smart" or should even really generate a lot of excitement about intelligence.  It's a way of packaging up processes so we can handle them.  But intelligence isn't really a "process" in this boring, deterministic way, and so we shouldn't expect "information terms" to shed a bunch of theoretical light on it.
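To make the "just bits" point concrete, here's a minimal sketch of my own (ordinary Python, nothing from Kurzweil or from Shannon's papers) that computes the Shannon entropy of a message.  Notice that the calculation cares only about symbol frequencies; it is completely indifferent to what the message means.

    import math
    from collections import Counter

    def shannon_entropy_bits(message):
        """Average bits per symbol, based only on symbol frequencies."""
        counts = Counter(message)
        total = len(message)
        entropy = 0.0
        for count in counts.values():
            p = count / total
            entropy -= p * math.log2(p)
        return entropy

    # Two very different "meanings", same statistics, same number of bits:
    print(shannon_entropy_bits("I love you"))
    print(shannon_entropy_bits("you love I"))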

Intelligence is about skipping all those "yes/no" decisions and intuitively reaching a conclusion from background facts or knowledge in a context.  It's sort of anti-information terms, really.  Or to put it more correctly, after intelligence has reached its conclusions, we can view what happened as a process, discretize the process, graph it in "information terms" and voilà!, we've got something described in information terms.

So my gripe here is that "information" may be a groundbreaking way of understanding processes and even of expressing results from science (e.g., thermodynamics, or entropy, or quantum limitations, or what have you), but it's not in the driver's seat for intelligence, properly construed.  Saying we're reverse engineering the brain is a nice buzz-phrase for doing some very mysterious thinking about thinking; saying "oh, and we're doing it in information terms" doesn't really add much.  In fact, whenever we have a theory of intelligence (whatever that might look like, who knows?), we can be pretty confident that there'll be some way of fitting it into an information-terms framework.  My point here is that this is small help in finding that elusive theory in the first place.

Shannon himself--the pioneer of information theory (Hurray!  Boo!)--bluntly dismissed any mystery when formalizing the theory, saying in effect that we should ignore what happens with the sender and receiver, and how it all gets translated into meaning and so on.  This is the "hard problem" of information--how we make meaning in our brains out of mindless bits.  That problem is not illuminated by formalizing the transmission of bits in purely physical terms between sender and receiver.  As Shannon knew, drawing the boundary at the hard problem meant he could make progress on the easier parts.  And so it is with science when it comes face to face with the mysteries of mind.  Ray, buddy, you're glossing it all with your information terms.  But then, maybe you have to, to have anything smart-sounding to say at all.


Kurzweil's Confusion

The real mystery about intelligence is how the human brain manages to do so much, with so little.  As Kurzweil himself notes, the human brain "uses a very inefficient electrochemical, digital-controlled analog computational process.  The bulk of its calculations are carried out in the interneuronal connections at a speed of only about two hundred calculations per second (in each connection), which is at least one million times slower than contemporary electronic circuits."

Kurzweil is making a case for the mystery of human intelligence.   When the human brain, viewed as a computational device, comes up so short, what needs to be explained is how our vastly superior intelligent thinking is possible.  The more a purely computational comparison shows brains as inferior to computers, the more computation itself seems a poor model for intelligence.

When supercomputers like Cray's Jaguar achieve petaFLOP performance (a million billion floating point operations per second), and we still can't point to anything intuitive or insightful or human-like that they can do (like understand natural language), it suggests pretty strongly that brute computational power is not a good measure of intelligence.  In fact, Kurzweil himself makes this point pretty well, though of course it's not his intent.  To put it another way, when everything is computational speed, and humans lose the game, then true intelligence is clearly not computational speed.
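If you want to see how lopsided the raw numbers are, the back-of-envelope arithmetic takes a few lines (a sketch of my own, using only the two figures quoted above):

    # Figures quoted above: a petaFLOP machine vs. Kurzweil's ~200 calculations
    # per second per interneuronal connection.
    petaflops = 1e15          # one million billion floating point operations per second
    per_connection = 200.0    # calculations per second, per connection (Kurzweil's figure)

    # Raw operation counts alone say nothing about which one understands language.
    print(petaflops / per_connection)   # ~5e12: connections' worth of raw throughput in one machine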

To put it yet another way, the slower and crappier our "architecture" is when viewed as a glorified computer, the more impressive our actual intelligence is--and of course, the more the very notion of "intelligence" is manifestly not analyzable by computational means.

So much for Moore's Law leading us to Artificial Intelligence.  Next thought?

Friday, January 31, 2014

Limiting Results in Science

Nicolaus Copernicus published his magnum opus, De Revolutionibus Orbium Coelestium, in 1543, shortly before he died.  With it, a series of sea changes rippled through Western Europe, and in the relatively minuscule span of about two hundred years, with Isaac Newton's publication of the Principia Mathematica, the Scientific Revolution had transformed the western world.   Before Copernicus the average 16th Century European believed the Earth was at the center of the cosmos, and that the universe was governed by teleological principles first elucidated by Aristotle some two thousand years earlier.  The fusion of Aristotelian cosmology and physics with the Judeo-Christian tradition in Scholastic thinkers like Saint Thomas Aquinas provided Western Europe with a universe filled with purpose and destiny in the coming of Christ, and in the artistic vision and genius of Dante the great story of the universe and our place in it found a common expression.  This was the world into which Copernicus published his heliocentric model of the universe.

From the beginning, though, the Copernican Revolution, as it came to be called, was a strange fusion of religious vision and empirical science.  On the one hand, Copernicus realized that the Ptolemaic cosmology was hopelessly convoluted.  Since Plato, astronomers had assumed that celestial orbits would follow perfect circles, because such Platonic Forms were more exalted and therefore were proper concepts for the description of the cosmos.  Yet perfect circles hopelessly complicated geocentric models.  The Ptolemaic geocentric model--the model that Copernicus realized was pointlessly convoluted--predicted the movements of heavenly bodies only with the aid of epicycles, equants, and other mathematical devices that were designed (somewhat ironically, as it turns out) to accommodate the many deviations from the perfect circular orbits postulated in the model.  (Think of an epicycle as a smaller hoop affixed to a larger hoop, so that deviations from traversing the larger hoop can be explained by placing the traversing object somewhere on the smaller hoop's orbit.)  Yet even with the addition of such fudge-factors in the geocentric model, limits to the prediction of celestial motion were commonplace.  In short, the models, though complicated, were also frustratingly inaccurate.

A heliocentric model was virtually unthinkable at the time of Copernicus, however, as the Earth was accorded a divinely special status--it was the planet, after all, where Jesus had lived and where it was thought that the Divine Plan of the entire universe was unfolding.  This observation about the barriers to doing science in the culture of the late Middle Ages in Europe, under the cloak of Catholicism, as it were, has formed part of the story of the Scientific Revolution ever since.  And, more or less, it's correct.  What's less appreciated, however, is that Copernicus himself felt divinely inspired; his religious views of the glory of the sun inspired, or rather sustained, his belief that the sun must be the center of the cosmos.  Only an object as glorified as the sun could fill such a role.  The Copernican Revolution, then, the kick-off of what came to be called the Scientific Revolution, was a triumph of religious vision and fervor as much as of empirical observation.

Copernicus was right, of course.  He needn't have held such elevated views of the sun.  He needn't necessarily have been infused with Greek neo-platonism or other-worldly thoughts at all.  His heliocentric model, though carefully written to avoid conflict with the Catholic Church, would chip away at the monolithic synthesis of religion and science under Scholasticism until a fissure formed, deepened with Galileo, and eventually split the entire intellectual world open with Newton.  After Newton, religion was divorced from science, and science was indeed liberated from the constraints of religious tenets and other non-empirical world views.  It was, indeed, a revolution.

But although early thinkers like Copernicus and even Newton were intoxicated with visions of a cosmos full of wonder and deity and immaterial reality (Newton once famously speculated that angels were partially responsible for gravitation, which he claimed he had only described mathematically, not explained in any deeper sense), by the beginning of the 19th Century the flight from immaterial conceptions of the universe was nearly complete.  The philosopher and mathematician Rene Descartes laid much of the groundwork for what would become known as the "official" scientific worldview in the 19th Century:  Scientific Materialism.  There never was a consensus about the metaphysical presuppositions after the Scientific Revolution, but in practice the cultural and intellectual consequences of the revolution were a profoundly materialistic underpinning for Science, conceived now as a distinct and privileged activity apart from religion or the humanities.  Matter and energy were all that existed.  And providing a conceptual framework for "matter and energy" was Descartes' life work.

Descartes himself was a theist, and the Cartesian conception of reality is known as Substance Dualism:  there are two basic substances, matter (or matter and energy), and an immaterial substance conceived of as mind or soul.  Everything fits into one of these two categories in the Cartesian framework.  Before Descartes, the "material world" was not completely separable from a mental realm.   []

Consider Galileo's focus on primary properties like quantity, mass, and so on; the other, secondary properties were relegated to the Immaterial Realm.  Descartes would later argue, famously, that the thinking self could not be doubted (his "cogito, ergo sum"), that a God who would not deceive us must exist, and that the human mind therefore existed apart from the material world.  In practice, however, as science achieved impressive and ever growing mastery of knowledge about the world, the Immaterial Realm became less and less important, and less plausible.  What we couldn't explain scientifically would end up in the immaterial realm.  But the progress of science seemed to suggest that such a strategy was a mere placeholder, for as the consequences of the Scientific Revolution were fully felt, even something as sacrosanct as the human mind would eventually yield to the march of science, and be explainable in purely material terms.  Hence, the original substantive division of body and mind--material and immaterial--tended to collapse into a monistic materialism.  Mind, it seemed, was a mere fiction, much like religious notions about the cosmos turned out to be after Copernicus.

Yet, once one half of the Cartesian framework is removed, the remaining Material Realm is relatively simplistic.  Whereas Aristotle and Greek thinking generally postulated secondary qualities like tastes and smells and colors as part of the purely physical world, along with rich conceptual structures like forms, the Cartesian materialist framework was minimalist, and consisted only in a void full of atoms--uncuttables--and the mathematics necessary to measure and count and explain the movements of these particles.  The universe in the Cartesian framework was suddenly dry, and simple, and, well, bleak.  Hence what began as a full-throated metaphysical dualism capable of sustaining a belief in an infinite Deity ended in a simple view of materialism amenable to doing science as it was done by the great minds of the Revolution.  Mathematics and matter were real; all else was fiction.

By the nineteenth century, then, the philosopher and scientist Pierre-Simon Laplace could proclaim that "he had no need of that hypothesis" when confronted with questions about how God fit into science.  Science, which began in a maelstrom of broad and speculative metaphysics and grand, exalted concepts of things, had in the span of a hundred years adopted not only a distinct and often hostile stance opposite Western religion (and in particular the Judeo-Christian tradition), but had eschewed the "mind", or immaterial, half of the Cartesian framework, adopting the other, minimalist, half instead.

Yet, Cartesian materialism has proven remarkably fruitful over the years.  If we think of the universe as simply matter and energy, and go about observing it, formulating hypotheses expressible in mathematics, and confirming these hypotheses with experiments (ideally), we end up with modern science.

Are there any limits to scientific enquiry?  Yes, and in fact the actual practice of science exposes limits seemingly as much (or as significantly) as it extends our positive knowledge.

What Copernicus, Galileo, Newton, and the other heroes of the Scientific Revolution gave us were Progressive Theories--pieces of knowledge about the physical world that showed us, in a positive way, how things went, and how we could explain them.

[bridge into discussion of limits]


The Puzzle of Limits



In a trivial sense, scientific knowledge about the physical world is always limiting.  The inverse square law specifies the force of gravity in Newtonian mechanics:  for two objects, the gravitational force between them is inversely proportional to the square of the distance between them.  This is a limit in a trivial sense because gravity can't now be described using some other equation (or any equation); it's not, for instance, inversely proportional to the cube of the distance.  But nothing really turns on this notion of limits, and indeed the very point of science is to find actual laws that govern the behavior of the world, and these laws will have some definite mathematical description or other.  When we say that pressure is related to volume in Boyle's Law, for instance, we don't feel we have a law until we've expressed the relationship between gas pressure and volume as a specific equation, which necessarily precludes other, different, equations.  All of this is to say we can dispense with the trivial notion of limits.
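For reference, the two laws just mentioned, written out in LaTeX notation (standard textbook forms, nothing beyond what the paragraph already says):

    F = G \, \frac{m_1 m_2}{r^2}    % Newtonian gravitation: the force between masses m_1 and m_2 falls off as the square of the distance r
    P_1 V_1 = P_2 V_2               % Boyle's Law: at fixed temperature, a gas's pressure-volume product stays constant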

What's more interesting are cases where scientific investigation reveals fundamental limits to our knowledge of certain phenomena in the world.  As with Newton's Inverse Square Law or Boyle's Law for gases, we've isolated a physical system and described its causal or law-like behavior in mathematical terms (algebraic equations in the two examples), but once we have this correct account, it turns out that there are inherent limitations to how we can use this knowledge to further explain or predict events or outcomes in the system.  The system itself, one might say, once correctly described in the language of science, has characteristics that prevent us from knowing what we wish to know about it, or using the knowledge we do have in certain desired ways.

The first major limiting result in this non-trivial sense probably came from 19th-century work in thermodynamics, from Carnot and Clausius through Maxwell.  Entropy, as it came to be known, is perhaps the ultimate limiting result.
[]

The 19th Century had other surprises.  Henri Poincare, the great French mathematician, proved that the famous "Three Body Problem" was unsolvable, and in so doing anticipated much of the modern field of Chaos Theory.  The Three Body Problem asks for a general, closed-form solution for the motions of three bodies interacting under Newtonian gravity; Poincare showed that no such general solution exists, and that the resulting trajectories can depend so sensitively on their starting points that long-range prediction becomes hopeless.

By comparison to the 20th Century, however, the limiting results emerging from work in the 19th Century were tame.  Two major 20th Century advances--one in physics and the other in mathematics--have ushered in sea changes to modern science that have greatly altered our Enlightenment notion of the nature and limits of science.  In physics, Heisenberg's Uncertainty Principle demonstrated that at the quantum level, we can't isolate the position and momentum of a particle simultaneously.  To get arbitrary precision on a particle's position, we necessarily give up precision about its momentum, and likewise pinning down the momentum of a subatomic particle limits our ability to pinpoint its position.  The Uncertainty Principle thereby established that as scientific investigation turns to the "really small", or subatomic, scale of the universe, there are boundaries to our knowledge of physics.  It's important to note here that the Uncertainty Principle is not provisional, a result based on current limits to technology or to the state of physics in the early 20th century.  Rather, it's a valid result in general; it holds for any measurement of subatomic phenomena, anywhere, at any time.  It's a fundamental limit that we've discovered about the nature of our world, when we turn our investigation to subatomic scales.
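The principle itself fits on one line (standard form, in LaTeX notation):  the product of the uncertainty in position and the uncertainty in momentum can never fall below a fixed quantum of fuzziness, no matter how good the instruments get.

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}    % \hbar is the reduced Planck constant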

Yet, for all the humbling implications of Heisenberg's principle, it helped launch modern quantum mechanics.  As is often the case, discovering what we can't know ends up more fruitful for science than discoveries about what we can.  Armed with the Uncertainty Principle, scientists were able to frame hypotheses and investigations into the nature of quantum phenomena and further develop the statistical framework of modern quantum mechanics.  Indeed, the notion that deterministic knowledge isn't fully possible in the subatomic realm, and thus that a statistical distribution of possible outcomes must be provided, is one of the key insights of the new physics.  Had we not known our limitations, the statistical framework for quantum mechanics might not have fallen into place so rapidly and intuitively as it did in the last century.  Again, limiting results in science have proven part of the backbone of progress in science, however paradoxical this may seem.

To wrap our minds around the significance of the productivity of limiting results in science, we might employ some metaphors.  Call the "limitless progress" assumptions undergirding the Scientific Revolution and 19th century scientific materialism (with Mach and others) Limitless Science.  A metaphor for Limitless Science will be something constructive, evoking the notion of continual progress by building something.  We might conceive of the results of Limitless Science as a crisscrossing of roads and infrastructure on a smooth round sphere (like the Earth, say, but without landmarks like canyons or mountains that obstruct road-building).  To get to any location on the sphere of Limitless Science, you simply plot out the distance, allow for conditions like rain or hills or sand or swamps, and lay out your roadway.  Each time a road is built, another location is accessible on the globe.  Continue in this way and eventually anyone can get anywhere on Limitless Science planet.  (To avoid having every square inch of the planet covered in roadways, we might stipulate that places within a bike ride or a walk from some road don't need their own roads.)

Beginning with the Second Law of Thermodynamics and moving through the 19th century to Poincare's insight into the chaotic behavior of complex systems, on up through the 20th century, we see that the limiting results stand in stark contradistinction to Limitless Science.  In fact, we'll need a different metaphor to visualize scientific progress in this world.  Our globe of ever-increasing roadways doesn't capture the fact that many times our inroads to results end up in dead-ends.  So, by contrast, Limiting-Discovery Science isn't a pristine spherical object where any road (theory) can reach any destination; rather, obstructions like a Grand Canyon, a raging river, or a range of impassable mountains dot the landscape.  Building roads on Limiting-Discovery Planet is not a matter of plotting a straight line from a beginning point to a destination, but of negotiating around obstructions (limiting results) to get to final destinations.  We can have fun with this metaphor:  if Heisenberg's Uncertainty Principle is a Grand Canyon to be navigated around, then say the Second Law of Thermodynamics is the Himalayas, and Chaos Theory is the Pacific.  The point here is that scientific investigation discovers these impassable landmarks, and our knowledge of the world then proceeds along roads we've engineered as detours in light of these discoveries.  And, to push the metaphor a bit, we find out things we can't do--places we can't go--unlike our Limitless Science globe, with its smooth, traversable surface.  It's no use building a road through the Grand Canyon, or over Mount Everest.  The features of this world eliminate options we thought we had, before the discoveries.  Likewise, of course, with scientific discovery itself.

To expand on this point a bit more, what's interesting about the metaphor is that it helps us see that, in a way, every truth we discover about the world around us is fruitful and progressive.  Discovering that the Grand Canyon is a landmark of the Southwestern United States is only limiting if we'd assumed that our human ambitions to build roads all over our world would never be frustrated by "facts on the ground", so to speak.  But scientific investigation is in the end about discovering truths, and these truths are fruitful even when limiting (when we first assume perfect, linear progress) because knowing how the world really is, is bound to be fruitful.  If you can't, after all, drive across the Grand Canyon, it's fruitful to know that fact.  You can then build a system of roads that skirts around it, and you're on your way.  Similarly with scientific discovery:  when we realize we can't, for instance, isolate with arbitrary precision the position and momentum of a subatomic particle simultaneously, this knowledge about the observational limits of subatomic phenomena paves the way for a formulation of quantum mechanics in terms of statistics and probability, rather than causal laws that presuppose knowledge we cannot have, and such results generate successful predictions elsewhere, along with technological and engineering innovations based on such results.  We learn, in other words, what we can do, when we discover what we can't.  And so it is with science, just as in other aspects of our lives.

This brings us to our next major point, which is that, unfortunately, scientific materialists hold metaphysical assumptions about the nature of the world that tend to force Limitless Science types of thinking about discovery.  If the entire universe is just matter and energy--if Physicalism is true, in other words--then every impossibility result emerging from scientific investigation is a kind of failure.  Why?  Because there's nothing mystical or immaterial about the universe, anywhere, and so one naturally assumes the sense of mystery and wonder will gradually give way as more and more physical knowledge is accumulated.  If Chaos Theory tells us that some physical systems exhibit a sensitive dependence on initial conditions such that long-range prediction of events in these systems is effectively impossible, this means only that with our current differential-equation techniques (say, the Navier-Stokes equations for fluid dynamics) we have some limits in those types of systems.  Since there's nothing much going on but unpredictability arising from the properties of chaotic systems, there are sure to be advances in our ability to build roads that will gradually whittle away the limitations here.  And to the extent that this isn't fully possible, it expresses only a fact about our limited brains, say, or the limits of computation given the complexity of the world.
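For readers who want to see "sensitive dependence on initial conditions" rather than take it on faith, here's a toy sketch of my own (the textbook logistic map, not fluid dynamics):  two starting values that agree to the tenth decimal place end up, after a few dozen iterations, bearing no resemblance to one another.

    def logistic_map(x0, r=4.0, steps=50):
        """Iterate x -> r*x*(1-x), a standard toy example of chaotic dynamics."""
        x = x0
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    # Two starting points that differ only in the tenth decimal place:
    print(logistic_map(0.2000000000))
    print(logistic_map(0.2000000001))
    # After 50 steps the two trajectories have completely diverged, which is why
    # long-range prediction in chaotic systems is effectively impossible.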

To put all this another way, scientific materialists are committed to seeing limiting results in science either as placeholders until better methods come around, or as lacunae in our own noetic capabilities.  You might say this is the "we're too primitive" or "we're too stupid" response to limiting results.  What is manifestly not allowed, given the materialist presupposition, is the possibility that the limits point to real boundaries in our application of physical concepts to the world.

Roughly, there are two possibilities that materialists will ignore when confronted with Grand Canyons or the Himalayas on Limiting-Discovery Planet.  One, the Cartesian materialism presupposed in science since the Enlightenment might be wrong or incomplete, so that some expanded framework for doing science is necessary.  Two, there may be immaterial properties in the universe.  In this latter case, the reason we can't lay roadwork down through the parts of Arizona that intersect with the Grand Canyon is simply because there's no physical "stuff" there to work with; the Grand Canyon is a metaphor for something that is not reducible to matter and energy.  This is an entirely possible and even reasonable response when contemplating the meaning of the limiting results (i.e., we're up against a part of the universe that isn't purely material, which is why material explanations fall short), but it won't be entertained seriously by scientific materialists.  Again, since all of the universe is just assumed to be matter and energy (and we have historical, roughly Cartesian accounts of matter and energy, to boot), limiting results will always end up as commentary about humans-when-doing-science (that we're either too primitive still to get it, or just too stupid, which is close to the same idea sans the possibility of future progress).

But we can rein all this philosophical speculation in, for the moment (though we'll have to return to it later).   We've left out maybe the most interesting limiting result not only of the last century, but perhaps ever.  It's arguably the most fruitful, as well, as its publication in 1931 led step by step to the birth of modern computation.  This is rather like discovering the Grand Canyon, reflecting on it for a while, kicking around in the hot desert sands, and then realizing a design for an airplane.  The result is Godel's Incompleteness Theorems.  As it's the sine qua non of our thesis--limiting results leading to fruitful scientific research--we'll turn to it in some detail next.

[Godel's Theorem, Halting Problem, modern computation, computational complexity versus undecideability, AI]
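As a small down payment on the bracketed outline above, here is the standard one-paragraph version of Turing's halting argument, rendered as a Python sketch of my own (the halts function is a hypothetical oracle; the whole point of the theorem is that it cannot actually be written):

    # Suppose, for contradiction, someone handed us a perfect halting oracle:
    def halts(program, argument):
        """Hypothetical: return True if program(argument) eventually halts."""
        raise NotImplementedError("Turing: no such general procedure can exist")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about a program
        # run on its own source.
        if halts(program, program):
            while True:
                pass      # loop forever
        else:
            return        # halt immediately

    # Feeding diagonal to itself is contradictory either way:
    # if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
    # if it were False, diagonal(diagonal) would halt.  So halts cannot exist,
    # and this undecidability is the computational cousin of Godel's theorems.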

AI--A Giant Limiting Result

Yet, the lessons of science seem lost on the Digital Singularity crowd.  The apposite metaphor here is in fact the one we rejected in the context of actual science--Limitless Science Planet.  Everything is onward and upward, with progress, progress, progress.  Indeed, futurist and AI enthusiast Ray Kurzweil insists that the lesson of the last few hundred years of human society is that technological innovation is not only increasing but increasing exponentially.  Such a nose-thumbing take on intellectual culture (including scientific discovery, of course, which drives technological innovation) excludes any real role for limiting results, and suggests instead the smooth, transparent globe of Limitless Science.  Indeed, as we build roads we become more and more capable of building better roads more quickly.  In such a rapidly transfiguring scenario, it's no wonder that Kurzweil and other Digital Singularity types expect Artificial Intelligence to "emerge" from scientific and technological progress in a few years' time.  It's assumed--without argument--that there are no Grand Canyons, or Mount Everests, or Pacific Oceans to fret about, and so there's nothing theoretical or in-principle that prevents computation--networks of computers, say--from coming alive into a superintelligence in the near future (we get the "near" part of "near future" from the observation that technological innovation is exponentially increasing).  This is Limitless Science at its finest.

But, in general, we've seen that Limitless Science isn't true.  In fact, there are lots of features of the actual world that the accretion of scientific knowledge reveals to be limiting.  And indeed, the history of science is replete with examples of such limiting results bearing scientific and technological fruit.  The actual world we discover using science, in other words, is vastly different from the one Digital Singularitists assume.  It's time now to return to the question of whether (a) an expanded framework for science or (b) an actual boundary to materialism in science is required.  But to do this, we'll need to tackle one final limiting result, the problem of consciousness.









Tuesday, January 28, 2014

Prolegomena to a Digital Humanism

"What makes something fully real is that it's impossible to represent it to completion."  - Jaron Lanier

The entire modern world is inverted.  In-Verted.  The modern world is the story of computation (think:  the internet), and computation is a representation of something real, an abstraction from real particulars.  The computation representing everything and connecting it together on the internet and on our digital devices is now more important to many of us than the real world.  I'm not making a retrograde, antediluvian, troglodyte, Luddite point; it's deep, what I'm saying.  It's hard to say clearly because my limited human brain doesn't want to wrap around it.  (Try, try, try -- if only some computation would help me.  But alas.)

The modern world is inverted to the extent that abstractions of reality become more important than the real things.  A computer representation of an oil painting is not an oil painting.  Most people think the representation is the wave of the future.  The oil painting is, actually.

What's a computer representation of a person?  This is the crux of the problem.  To understand the problem we have to understand two big theory problems here, and I'll be at some pains to explain them.  First, suppose I represent you -- or "model" you, in the lingo -- in some software code.  Suppose for example I model all the employees working at a company because I want to predict who fits best for a business project happening in, say, Europe (It's a big corporation with huge global reach and many business units, like IBM.  No one really knows all the employees and who's qualified for what, except locally perhaps.  The example is real-world).  That necessarily means I have a thinner copy of the "real" you-- I may not know you at all, so I'm abstracting away some data stored in a database about you--your position, salary, latest performance reports, the work you do, a list of job skills.  Because abstractions are simplified models of real things, they can be used to do big calculations (like a database operation that returns all the people who know C++); it also means they leave out details.  Abstractions conceived of as accurate representations are a lie, to put it provocatively, as the philosopher Nietzsche remarked once (He said that Words Lie.  You say "this is a leaf", pointing to a leaf.  There is no such thing as a "leaf" as an abstract concept.  There are only leaves...).
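To make the "thinner copy" point concrete, here's roughly what such a model looks like in code (a made-up, minimal sketch of my own; the field names are illustrative, not any real HR system):

    # A "model" of an employee is just the handful of fields someone chose to store.
    employees = [
        {"name": "A. Rivera", "role": "Developer", "skills": ["C++", "Python"], "region": "EU"},
        {"name": "B. Chen",   "role": "Analyst",   "skills": ["SQL"],           "region": "US"},
    ]

    # Big, fast queries become trivial over the abstraction...
    cpp_people = [e["name"] for e in employees if "C++" in e["skills"]]
    print(cpp_people)

    # ...but everything not in the record (judgment, temperament, what you're
    # actually like to work with) simply does not exist as far as the query is concerned.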

All representations are in a language, and every language has limits to its expressiveness.  Natural language like English is the most expressive, which is why a novel or a poem can capture more about the human experience than mathematics can, or computer code.  This point is lost on many Silicon Valley 'Singularity' types--technologists and futurists who want computation to replace the messy real world.

Change the example if you want, because abstracting the real world and especially human behavior into slick computer models is all the rage today.  Examples abound.  Say I always go shopping at the same store.  I shop at Safeway.  Whenever I go to Safeway, I get a bunch of coupons when I check out at the self-checkout.  The coupons are strangely relevant--I get deals on protein bars and chocolate milk and so on.  Funny thing is that I buy all those items, but I didn't necessarily buy any of the coupon items when I received those coupons.  What's happening here is that Safeway has made a model of "me" in its databases, and it runs some simple statistics on my purchases as a function of time (like:  month by month by item type, say), and from this data it makes recommendations.  People like this sort of service, generally.  Talk to technologists and the people who're modernizing the consumer experience and you'll get a vision of our future:  walk into the supermarket and scan your ID into the cart.  It starts directing you to items you need, and recommending other items:  "I noticed you bought tangerines the other day; you might like tangelos too.  They're on sale today on Aisle 5."

Now, nothing is wrong here; it's just a "lie" of sorts.  I'm generally not my Safeway model.  Not completely.  The model of me that Safeway has is based on my past buying patterns, so if I change anything or if the world changes, it's suddenly irrelevant and it starts bugging me instead of helping me.  It's a lie, like Nietzsche said, and so it gets out of sync eventually with the actual me that's a real person.  I don't buy chocolate, but on Valentine's Day I do, say.  Or I'm always buying ice cream, but last week I started the Four Hour Body diet, so now I only buy that on Saturdays, and I buy beans all the time now.  But right when I get sick of them and start buying lentils, the system has a good representation of me as a bean-buyer, so now I'm getting coupons for beans at precisely the time I'm trying to go no-beans (but I'm still into legumes).  Or I'm running errands for someone else, who loves Almond Milk.  Almond Milk is on sale but I don't get that information; I only see that 2% Lactose Free milk is on sale, because I usually buy that.  The more the model of me is allowed to lord over me, too, the worse things get.  If the cart starts pulling me around to items that I "need", and it's wrong, I'm now fighting with a physical object--the shopping cart--because it's keeping me from buying lentils and Almond Milk.  None of this has happened yet, but welcome to creating mathematical objects out of real things.  The computer can't help with any of my buying behavior today, because it's got a stale, simple model of me based on my buying behavior yesterday.  That's how computers work.  (Would it be a surprise to learn that the entire internet is like this?  I mean:  a shallow, stale, simple model of everything?  Well, it is.  Read on.)
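The Safeway example is easy to caricature in a few lines.  Here's a minimal sketch of my own (entirely made up, not Safeway's actual system) of a frequency-based recommender; the thing to notice is that it can only ever recommend yesterday's me:

    from collections import Counter

    purchase_history = ["beans", "beans", "ice cream", "beans", "protein bars", "beans"]

    def recommend(history, n=2):
        """Recommend whatever the shopper bought most often in the past."""
        return [item for item, _ in Counter(history).most_common(n)]

    print(recommend(purchase_history))   # ['beans', 'ice cream']
    # The week I switch from beans to lentils, the model keeps pushing beans:
    # it has no way to see the change until the stale history washes out.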

Let's finish up with abstraction.  Someone like Matthew Crawford, who wrote the best-selling "Shop Class as Soulcraft," walked away from a six-figure job at a D.C. think tank, writing about politics, to fix motorcycles.  He realized the modern world is inverted -- abstractions are becoming more important than real things and experiences -- and he was desperate to find something meaningful.  He wasn't persuaded, as Silicon Valley culture seems to be, that all these abstractions are actually getting smarter and smarter and making us all better and better.  He opened a motorcycle repair shop in Virginia and wrote a book about how you can't rely on abstractions and be any good at fixing real things like motorcycles.

This is an interesting point, actually.  Crawford's an interesting guy.  You could write a dissertation just on how difficult it is to diagnose and fix something complicated.  You can download instructions and diagnostics from the internet, but you're not a real mechanic if you can't feel your way through the problem.  Computation is supposed to replace all of this embarrassing human stuff -- intuition, skill, judgment.  "Feeling our way" through things is supposed to be a thing of the past now, and really the pesky "human element" is supposed to go away too.

A confusion of the modern inverted age is that as computers get smarter (they don't, not the way people do), we're supposed to get smarter and better, too.  But all this sanguine optimism that everything is getting "smarter" disguises the truth, which is that we can't get "smarter" by pretending that computers are smart -- we have to choose.  If we pretend that abstractions are "smart," we have to fit ourselves into them to keep the illusion going.  If we start imposing the messy reality of life back onto them, the abstractions start looking not-so-smart, and the entire illusion is gone.  Poof!  To the extent that we can't handle exposing our illusions, we're stooping down to accommodate them.  All this becomes clear when you open a motorcycle repair shop and discover that you have to feel your way through the problem, and that abstractions of fixes don't really help.

So much for Crawford.  There are many Crawfords today, actually.  I think it's time to start piecing together the "resistance" to what Lanier calls the Digital Maoists or Cybernetic Totalists -- the people saying that the abstractions are more real and smart than what's actually real and smart.  The people saying the human element is old news and unimportant.  The people saying the digital world is getting smarter and coming alive.  If it sounds crazy (and it should), it's time to start pointing it out.

I can talk about Facebook now, because I don't like Facebook at all and almost everyone I know seems obsessed with it.  (That makes me reluctant to complain too much -- though when I'm aggravated, it emboldens me to complain with a kind of righteous indignation.)  Facebook is a model of you for the purposes of telling a story about you.  Who is reading the story, and why?  On the internet this is called "sharing," because you connect to other models called "friends" and information about your model is exchanged with your Friend-Models.  Mostly, this trickles down to the actual things -- the people -- so that we feel a certain way and receive a certain satisfaction.  It's funny that people who use Facebook frequently and report having many friends on Facebook also report greater loneliness in the real world.  Which way does the arrow of causality go?  Were they lonely types of people first?  Or does abstraction into shallow models, and the use of emotional words like "friends" and "social," somehow make their actual social existence worse?

I don't think there's much wrong with a shallow Facebook model of me or you, really.  Facebook started out as a way to gawk at attractive nineteen-year-old Harvard women, and if that's what you want to do, you need an abstraction that encourages photo sharing.  I don't necessarily want this experience to be deep, either.  I don't want three hundred friends to have a deep model of me online.

Theoretically, though, the reason Facebook models are shallow is the same reason that Safeway only wants my buying behavior in my Supermarket Model.  Since "Facebook" is really a bunch of servers (a "server" is just a computer that serves other computers), what the real people who own Facebook can do with our models is determined by what Facebook's computers can do with them.  And since computers are good at doing lots of shallow things quickly (think of the Safeway database), why would Facebook want rich models of us?  It couldn't do much with them.  It's an important but conspiratorial-sounding point that most of what Facebook wants to do with your Facebook model, connected to your Facebook friend models, is run statistics on which ads to sell you.  It's another significant but bombshell-type observation that all the supposed emerging smartness of the World Wide Web is laser-focused on targeted advertising.  All this liberation we think we feel is really disguising huge seas of old-fashioned persuasion and advertising.  Because everything we get online is (essentially) free -- think Facebook -- it's no wonder that the actual money is concentrated in ads.  (Where does the actual earned money come from, still, to buy the advertised goods and services?  That gets us back into the messy real world.)

So much for abstraction.  Let's say that abstraction is often shallow, even vapid.  It's incomplete.  This says something true.  It means we ought to think of computation as a shallow but convenient way to do lots of things quickly.  We shouldn't confuse it with life.  Life here includes the mystery of human life:  our consciousness, our thoughts, our culture.  We confuse abstractions with the fullness of life at our peril.  It's interesting to ask what vision of digital technology would support a better cultural experience, or whether shallow, infantile, ad-driven models are the best we can do.  Maybe we cheerlead so loudly for Facebook and the coming "digital revolution" because we think it's the only way things can turn out, and it's better than horses and buggies.  That's a really unfortunate way to think about technology and innovation, I would say...

The second way the modern world is inverted -- the second theoretical problem with treating computer models as reality -- is known as the Problem of Induction (POI).  Someone like the former trader Nassim Nicholas Taleb describes the POI as the problem of the Black Swan.  Most swans -- the vast majority of swans -- are white, so eventually you generalize, in your code or your database or your mind, to something like "All swans are white."  Taleb associates this mindset with the Gaussian (or normal) distribution, because under it you don't expect outliers that screw everything up.  But Taleb says that real events in the world are sometimes not Gaussian at all; they follow fat-tailed distributions, where extreme outliers are far more likely than a bell curve would suggest.  He calls this the Black Swan phenomenon, and it's tied to the ancient POI, as I'll explain:  the Black Swan is what shows up after we've all concluded that "All swans are white."

We'll say first that a Gaussian or normal distribution is like the height of people in the real world.  Most people -- the vast majority, in fact -- are between 5 and 6 feet tall.  That's a normal distribution.  It's rare to see a 7-foot guy or a 4-foot one, and essentially impossible to see a 9-foot one or a 3-foot one.  If human height followed a fat-tailed distribution, though -- a "Black Swan" distribution -- then occasionally there'd be a guy a hundred feet tall.  He'd be rare, but unlike with the Gaussian, he'd be bound to show up one day.  That would screw something up, no doubt, so it's no wonder we prefer the Gaussian for most of the representing we do of the actual world.
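
If you'd rather see the difference than take my word for it, here's a rough sketch:  draw a hundred thousand "heights" from a Gaussian and from a fat-tailed (Pareto) distribution and compare the largest value in each.  The parameters are arbitrary choices of mine; only the shape of the tails matters:

```python
import random

random.seed(0)  # so the sketch is repeatable

# Gaussian "heights": mean 5.5 feet, standard deviation 0.35 feet.
gaussian_heights = [random.gauss(5.5, 0.35) for _ in range(100_000)]

# Fat-tailed "heights": a Pareto distribution scaled so typical values
# land in the same ballpark as the Gaussian ones.
fat_tailed_heights = [5.0 * random.paretovariate(3) for _ in range(100_000)]

print(round(max(gaussian_heights), 1))    # around 7 feet: rare, never absurd
print(round(max(fat_tailed_heights), 1))  # can be many times the typical
                                          # height -- the hundred-foot guy
```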

Taleb explains, however, that when it comes to social systems like the economy, we unfortunately do get Black Swans.  We get 100-foot-tall people occasionally -- in other words, unpredictable market crashes.  We can't predict when they'll happen, he says, but we can predict that they'll come around eventually and screw everything up.  And when we create shallow abstractions of real economic behavior -- credit default swaps, derivatives, and other mathematical representations of the real world -- we are guaranteed to get less predictable behavior and really large anomalies (like 100-foot-tall people).  So, he says, the economy is not Gaussian.

All of this is well and good, but all the computer modeling is based on Gaussian principles.  This is what's called Really Bad, because we're relying on all that modeling, remember.  It means that as we make the economy "digital" with shallow mathematical abstractions (like default swaps), we also make it more of a "lie," insofar as the Black Swan tends to get concealed in the layers of Gaussian computation we're using to make money.  All the money is made possible when we strip away the rich features of reality, like actual real estate, and digitize them.  If we know that sooner or later we're guaranteed to lose all the money we've made -- because the future behavior of these systems contains a Black Swan, while our computer models assure us the swans are all white -- do we care?  As long as we make the money now, maybe we don't.  If we know we're getting lonely on Facebook but we still have something to do at night with all our representations of friends, do we care?  It takes some thought to figure out what we care about and whether we care at all.  (It's interesting to ask whether we start caring, as a rule, only after things seem pretty bad.)

This is the case with the economy, it seems.

So the second big theoretical problem with the inverted modern world is that computation is inductive.  This is a fancy way of saying that the Safeway database cannot figure out what I might like except on the basis of what I've already proven I like.  It doesn't know the real me, for one.  It knows the abstraction.  And even more importantly, because computation is inductive, it must always infer something about me or my future from something known about my past.  Human thought itself is partly inductive, which is why I'll expect you to show up at around 5 pm at the coffee shop on Thursdays, because you always do.  But I might also know something about you -- say, that you work there at 5.

Knowing that you work there at 5 on Thursdays is called "causal knowledge," because I know something about you beyond past observations of you showing up.  I have some insight about you.  It's "causal" because if you work there at 5 on Thursday, that causes you to be there, regardless of whether you've shown up in the past.  It's a more powerful kind of knowledge about you.  We want our computers to have insights like this, but really they're more at home with a database full of entries about your prior arrivals on Thursdays at 5.  The computer doesn't know or care why you show up.  That's induction.
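
Here's a small sketch of the contrast, with both the "database" and the "schedule" made up for illustration.  The inductive function only has a tally of past arrivals; the causal one has a reason:

```python
# Inductive prediction: generalize from past observations alone.
past_thursday_arrivals = ["5pm", "5pm", "5pm", "5pm"]  # what the database has

def inductive_prediction(observations):
    # "You always show up at 5, so you'll show up at 5."
    return max(set(observations), key=observations.count)

# Causal prediction: use a reason, not a tally.
work_schedule = {"Thursday": "5pm"}  # the shift that *causes* the arrival

def causal_prediction(day, schedule):
    # Works even with zero past observations.
    return schedule.get(day)

print(inductive_prediction(past_thursday_arrivals))  # 5pm
print(causal_prediction("Thursday", work_schedule))  # 5pm -- but for a reason
```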

Induction is what connects the Black Swan to stock market crashes:  we were all thinking "All swans are white" based on our computer models of the past.  Those models were wrong, it turns out, so we didn't see the Black Swan coming.  If we hadn't been so convinced the computer models were smart, we might have noticed the fat-tailed properties of the system.  Or we might have noticed the inherent, real-world volatility we were amplifying by abstracting it and relying on inductive inferences instead of causal knowledge or insight.  Computers are very good at convincing us we're being very smart by analyzing huge data sets from the past.  When something not in that past shows up, they're also very good at letting things turn chaotic.  It's a reminder that the real world is actually in charge.

It's very complicated to explain why computers don't naturally have the "insight" or "causal knowledge" part of thinking that we do (and why they can't really be programmed to have it in future "smarter" versions either).  Artificial Intelligence enthusiasts will generally insist that computers will get smarter and eventually have insights that predict the Black Swans (the very ones they've also made possible).  In general, however, the Problem of Induction -- a kind of blind spot, to go along with the "lie" of abstraction -- is part and parcel of computation.  Combine this inductive blindness with the shallowness of the models, and you get a world that is really good at doing simple things quickly.  If you question whether this is our inevitable future, and whether perhaps there are entirely new vistas of human experience and culture available to us (including in the technology we make), I think you're on the right track.

Here is a representation of me:  42 73 1 M.  What does it mean?  I once used something called "log-linear" modeling to predict who would pay traffic tickets, in data provided by the state of Iowa (true story).  We used the models of hundreds of thousands of people, with database entries like this example but more complicated, to flag those with greater than some threshold n likelihood of never paying.  Then we recommended that the state of Iowa not bother with those people.  It worked pretty well, actually, which is why we make shallow representations for tasks like this...
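
For flavor, here's a toy stand-in for that kind of scoring -- not the actual Iowa model, just a hand-set logistic-style score over an invented thin record of (age, prior tickets, employed), flagging anyone whose estimated chance of paying falls below a threshold n:

```python
import math

def pay_probability(age, prior_tickets, employed):
    # Hand-set weights, purely illustrative -- a real model would be fit
    # to hundreds of thousands of records.
    z = -1.0 + 0.03 * age - 0.4 * prior_tickets + 1.2 * employed
    return 1 / (1 + math.exp(-z))

records = [
    # (label, age, prior unpaid tickets, employed?) -- invented people
    ("person_1", 42, 3, 1),
    ("person_2", 19, 8, 0),
]

n = 0.5  # don't bother pursuing anyone below this estimated probability
for label, age, priors, employed in records:
    p = pay_probability(age, priors, employed)
    print(label, round(p, 2), "skip" if p < n else "pursue")
```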

What's funny about technologists is how conservative they are.  A hundred and fifty years ago, the technologists were passionately discussing the latest, most powerful methods for extracting whale oil from the blubber of sperm and baleen whales harpooned and gutted against the wooden hulls of Atlantic whalers.  No one stopped to wonder whether the practice was any good, because it seemed inevitable.  There was money to be made, too.  No one even considered that there might be something better, until petroleum showed up.  You see the same mindset in techno-futurists like Kevin Kelly, co-founder of Wired magazine, or the author and futurist Ray Kurzweil, who always talk as if they can extrapolate our digital future from observations of the past.  They pretend that seeing into the future is simple, like a computation.  Kelly is also eager to explain that technological innovation is not the product of individual human insight and genius but a predictable, normal process.  The great philosopher of science Karl Popper explained why technological innovation is intrinsically unpredictable.  But you can see that Kelly, and folks like Clay Shirky (Here Comes Everybody, Cognitive Surplus), already see the future and have already concluded that humans have less and less to do with it as digital technology gets smarter and smarter.  All those predictions and all those books sold (real paper books, too!) would be wrong if someone just invented a better mousetrap, like people always do.  When petroleum became readily available, all the whale-oil predictions became silly and retrograde almost overnight.

If you believe there are no Black Swans and things are moving inevitably in one direction, you won't like these comparisons (do you?).  But the real world is messy, and technology is not smart the way human minds are smart, so if we want to predict the future they describe, we have to pretend.  When everything is shallow (abstraction) and quick but limited (induction), you need something to grab onto to compensate, which is why we keep saying all the computation will get "smarter."  If it doesn't, we're stuck pretending that shallow and quick is human culture.  That's too hard to do, eventually, which is why we have innovation, why the Atlantic whalers eventually became obsolete, and why we're due for some different digital designs than what we have now.  I have some thoughts on this, but that's the subject of another discussion.