
Friday, January 31, 2014

Limiting Results in Science

Nicolaus Copernicus published his magnum opus, De Revolutionibus Orbium Coelestium, in 1543, shortly before he died.  With it, a series of sea changes rippled through Western Europe, and in the relatively minuscule span of about two hundred years, culminating in Isaac Newton's publication of the Principia, the Scientific Revolution transformed the western world.  Before Copernicus the average 16th Century European believed the Earth was at the center of the cosmos, and that the universe was governed by teleological principles first elucidated by Aristotle nearly two thousand years earlier.  The fusion of Aristotelian cosmology and physics with the Judeo-Christian tradition in Scholastic thinkers like Saint Thomas Aquinas provided Western Europe with a universe filled with purpose and destiny in the coming of Christ, and in the artistic vision and genius of Dante the great story of the universe and our place in it found a common expression.  This was the world into which Copernicus published his heliocentric model of the universe.

From the beginning, though, the Copernican Revolution, as it came to be called, was a strange fusion of religious vision and empirical science.  On the one hand, Copernicus realized that Ptolemy's cosmology was hopelessly convoluted.  Since Plato, astronomers had assumed that celestial orbits must follow perfect circles, because such Platonic Forms were more exalted and therefore were proper concepts for the description of the cosmos.  Yet perfect circles hopelessly complicated geocentric models.  Ptolemy's geocentric model--the model that Copernicus realized was pointlessly convoluted--predicted the movements of heavenly bodies only with the aid of epicycles, equants, and other mathematical devices that were designed (somewhat ironically, as it turns out) to accommodate the many deviations from perfect circular orbits postulated in the model.  (Think of an epicycle as a smaller hoop affixed to a larger hoop, so that deviations from traversing the larger hoop can be explained by placing the traversing object somewhere on the smaller hoop's orbit.)  Yet even with the addition of such fudge factors in Ptolemy's geocentric model, limits to the prediction of celestial motion were commonplace.  In short, the models, though complicated, were also frustratingly inaccurate.

A heliocentric model was virtually unthinkable at the time of Copernicus, however, as the Earth was accorded a divinely special status--it was the planet, after all, where Jesus had lived and where it was thought that the Divine Plan of the entire universe was unfolding.  This observation about the barriers to doing science in the culture of the late Middle Ages in Europe, under the cloak of Catholicism, as it were, has formed part of the story of the Scientific Revolution ever since.  And, more or less, it's correct.  What's less appreciated, however, is that Copernicus himself felt divinely inspired; his religious views of the glory of the sun inspired, or rather sustained, his belief that the sun must be the center of the cosmos.  Only an object as glorified as the sun could fill such a role.  The Copernican Revolution, then, the kick-off of what came to be called the Scientific Revolution, was a triumph of religious vision and fervor as much as of empirical reality.

Copernicus was right, of course.  He needn't have held such elevated views of the sun.  He needn't necessarily have been infused with Greek neo-platonism or other-worldly thoughts at all.  His heliocentric model, though carefully written to avoid conflict with the Catholic Church, would chip at the monolithic synthesis of religion and science under Scholasticism until a fissure formed, deepened with Galileo, and eventually split the entire intellectual world open with Newton.  After Newton, religion was divorced from science, and science was indeed liberated from the constraints of religious tenets and other non-empirical worldviews.  It was, indeed, a revolution.

But although early thinkers like Copernicus and even Newton were intoxicated with visions of a cosmos full of wonder and deity and immaterial reality (Newton once famously speculated that angels were partially responsible for gravitation, which he claimed he had only described mathematically, and had not explained in any deeper sense), by the beginning of the 19th Century the flight from immaterial conceptions of the universe was nearly complete.  The philosopher and mathematician Rene Descartes laid much of the groundwork for what would become known as the "official" scientific worldview of the 19th Century:  Scientific Materialism.  There never was a consensus about metaphysical presuppositions after the Scientific Revolution, but in practice the cultural and intellectual consequences of the revolution were a profoundly materialistic underpinning for Science, conceived now as a distinct and privileged activity apart from religion or the humanities.  Matter and energy were all that existed.  And providing a conceptual framework for "matter and energy" was Descartes' life work.

Descartes himself was a theist, and the Cartesian conception of reality is known as Substance Dualism:  there are two basic substances, matter (or matter and energy), and an immaterial substance conceived of as mind or soul.  Everything fits into one of these two categories in the Cartesian framework.  Before Descartes, the "material world" was not fully separable from the mental realm.

The division echoed Galileo's focus on primary properties like quantity, mass, and so on; the secondary properties were relegated to the Immaterial Realm.  Descartes would later argue, famously, that since his own existence as a thinking thing could not be doubted (his "cogito, ergo sum"), and since God must exist and would not deceive us, the human mind existed apart from the material world.  In practice, however, as science achieved impressive and ever growing mastery of knowledge about the world, the Immaterial Realm became less and less important, and less plausible.  What we couldn't explain scientifically would end up in the immaterial realm.  But the progress of science seemed to suggest that such a strategy was a mere placeholder, for as the consequences of the Scientific Revolution were fully felt, even something as sacrosanct as the human mind would eventually yield to the march of science, and be explainable in purely material terms.  Hence, the original substantive division of body and mind -- material and immaterial -- tended to collapse into a monistic materialism.  Mind, it seemed, was a mere fiction, much like religious notions about the cosmos turned out to be after Copernicus.

Yet, once one half of the Cartesian framework is removed, the remaining Material Realm is relatively simplistic.  Whereas Aristotle and Greek thinking generally postulated secondary qualities like tastes and smells and colors as part of the purely physical world, along with rich conceptual structures like forms, the Cartesian materialist framework was minimalist, and consisted only in a void full of atoms--uncuttables--and the mathematics necessary to measure and count and explain the movements of these particles.  The universe in the Cartesian framework was suddenly dry, and simple, and, well, bleak.  Hence what began as a full-throated metaphysical dualism capable of sustaining belief in an infinite Deity ended in a simple materialism amenable to doing science as it was done by the great minds of the Revolution.  Mathematics and matter were real; all else was fiction.

By the nineteenth century, then, the mathematician and astronomer Pierre-Simon Laplace could proclaim that he "had no need of that hypothesis" when confronted with questions about how God fit into science.  Science, which began in a maelstrom of broad and speculative metaphysics, in grand, exalted concepts of things, had in the span of a hundred years adopted not only a distinct and often hostile stance toward Western religion (and in particular the Judeo-Christian tradition), but had eschewed the "mind," or immaterial, half of the Cartesian framework, adopting the other, minimalist, half instead.

Yet, Cartesian materialism has proven remarkably fruitful over the years.  If we think of the universe as simply matter and energy, and go about observing it, formulating hypotheses expressible in mathematics, and confirming these hypotheses with experiments (ideally), we end up with modern science.

Are there any limits to scientific enquiry?  Yes, and in fact the actual practice of science exposes such limits as surely, and as significantly, as it extends our knowledge.

What Copernicus, Galileo, Newton, and the other heroes of the Scientific Revolution gave us were Progressive Theories--pieces of knowledge about the physical world that showed us, in a positive way, how things went, and how we could explain them.

[bridge into discussion of limits]

The Puzzle of Limits

In a trivial sense, scientific knowledge about the physical world is always limiting.  The inverse square law specifies the force of gravity in Newtonian mechanics:  for two objects, the gravitational force between them is proportional to the product of their masses and inversely proportional to the square of the distance between them.  This is a limit in a trivial sense because gravity can't now be described using some other equation (or any other equation); it's not, for instance, inversely proportional to the cube of their distance.  But nothing really turns on this notion of limits, and indeed the very point of science is to find actual laws that govern the behavior of the world, and these laws will have some definite mathematical description or other.  When we say that pressure is related to volume in Boyle's Law, for instance, we don't feel we have a law until we've expressed the relationship between gas pressure and volume as a specific equation, which necessarily precludes other, different, equations.  All of this is to say we can dispense with the trivial notion of limits.
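The definiteness of such laws is easy to exhibit.  A minimal sketch in Python of the two laws just mentioned; the constants and inputs are illustrative numbers, not values from the text:

```python
G = 6.674e-11  # gravitational constant, N * m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Newton's inverse square law: F = G * m1 * m2 / r**2."""
    return G * m1 * m2 / r**2

def boyle_pressure(p1, v1, v2):
    """Boyle's Law at fixed temperature: p1 * v1 == p2 * v2."""
    return p1 * v1 / v2

# Doubling the distance cuts the gravitational force to a quarter;
# no other exponent fits the observed orbits.
f_near = gravitational_force(5.0e24, 7.0e22, 4.0e8)
f_far = gravitational_force(5.0e24, 7.0e22, 8.0e8)

# Doubling a gas's volume halves its pressure.
p2 = boyle_pressure(100.0, 2.0, 4.0)
```

The point is simply that once either equation is written down, all rival equations are excluded; that is the trivial sense of "limit."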

What's more interesting are cases where scientific investigation reveals fundamental limits to our knowledge of certain phenomena in the world.  As with Newton's inverse square law or Boyle's Law for gases, we've isolated a physical system and described its causal or law-like behavior in mathematical terms (algebraic equations in the two examples), but once we have this correct account, it turns out that there are inherent limitations to how we can use this knowledge to further explain or predict events or outcomes in the system.  The system itself, one might say, once correctly described in the language of science, has characteristics that prevent us from knowing what we wish to know about it, or using the knowledge we do have in certain desired ways.

The first major limiting result in this non-trivial sense probably came from the work of Clausius and Maxwell in thermodynamics in the 19th century.  The Second Law of Thermodynamics--the law that the entropy of an isolated system never decreases--is perhaps the ultimate limiting result.
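Why entropy limits us can be made concrete by counting arrangements, in the spirit of Boltzmann's statistical reading of the Second Law.  A toy sketch (the particle counts are illustrative, not from the text):

```python
from math import comb

def microstates(n_particles, n_left):
    """Count the arrangements of n_particles distinguishable particles
    with exactly n_left of them in the left half of a box."""
    return comb(n_particles, n_left)

# The evenly mixed macrostate has overwhelmingly more microstates than
# the ordered "all on one side" macrostate, so an isolated gas drifts
# toward the mixed state and, in practice, never spontaneously back.
mixed = microstates(100, 50)
ordered = microstates(100, 0)
```

Even at a hundred particles the mixed state outnumbers the ordered one by more than twenty-eight orders of magnitude; the "limit" is not a failure of instruments but sheer arithmetic.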

The 19th Century had other surprises.  Henri Poincare, the great French mathematician, proved that the famous "Three Body Problem" had no general closed-form solution, and in so doing anticipated much of the modern field of Chaos Theory.  The Three Body Problem asks for the long-term motion of three bodies interacting under mutual gravitation; Poincare showed not only that no general analytic solution exists, but that such systems can depend so sensitively on their initial conditions that long-range prediction breaks down.

By comparison to the 20th Century, however, the limiting results emerging from work in the 19th Century were tame.  Two major 20th Century advances--one in physics and the other in mathematics--have ushered in sea changes to modern science that have greatly altered our Enlightenment notion of the nature and limits of science.  In physics, Heisenberg's Uncertainty Principle demonstrated that at the quantum level, we can't isolate the position and momentum of a particle simultaneously.  To get arbitrary precision in a particle's position, we necessarily disturb its momentum, and likewise isolating the momentum of a subatomic particle limits our ability to pinpoint its position.  The Uncertainty Principle thereby established that as scientific investigation turns to the "really small", or subatomic, scale of the universe, there are boundaries to our knowledge of physics.  It's important to note here that the Uncertainty Principle is not provisional, a result based on current limits to technology or to the state of physics in the early 20th century.  Rather, it's a valid result in general; it holds for any measurement of subatomic phenomena, anywhere, at any time.  It's a fundamental limit that we've discovered about the nature of our world, when we turn our investigation to subatomic scales.
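The quantitative form of the limit is the inequality Δx · Δp ≥ ħ/2.  A sketch of the tradeoff (the position spreads below are illustrative numbers):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in J * s

def min_momentum_spread(delta_x):
    """Smallest momentum uncertainty compatible with pinning position
    down to within delta_x, per Heisenberg's inequality."""
    return HBAR / (2.0 * delta_x)

# Localizing a particle ten times more precisely forces at least ten
# times more uncertainty in its momentum, regardless of the instrument.
dp_coarse = min_momentum_spread(1e-10)  # roughly an atomic diameter
dp_fine = min_momentum_spread(1e-11)
```

The bound holds for any apparatus whatsoever, which is what makes it a discovered feature of the world rather than an engineering shortfall.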

Yet, for all the humbling implications of Heisenberg's principle, it helped launch modern quantum mechanics.  As is often the case, discovering what we can't know ends up more fruitful for science than discoveries about what we can.  Armed with the Uncertainty Principle, scientists were able to frame hypotheses and investigations into the nature of quantum phenomena and further develop the statistical framework of modern quantum mechanics.  Indeed, the notion that deterministic knowledge isn't fully possible in the subatomic realm, and thus that a statistical distribution of possible outcomes must be provided, is one of the key insights of the new physics.  Had we not known our limitations, the statistical framework for quantum mechanics might not have fallen into place so rapidly and intuitively as it did in the last century.  Again, limiting results have proven part of the backbone of progress in science, however paradoxical this may seem.

To wrap our minds around the significance of the productivity of limiting results in science, we might employ some metaphors.  Call the "limitless progress" assumptions undergirding the Scientific Revolution and 19th century scientific materialism (with Mach and others) Limitless Science.  A metaphor for Limitless Science will be something constructive, evoking the notion of continual progress by building something.  We might conceive of the results of Limitless Science as a crisscrossing of roads and infrastructure on a smooth round sphere (like the Earth, say, but without landmarks like canyons or mountains that obstruct road-building).  To get to any location on the sphere of Limitless Science, you simply plot out the distance, allow for conditions like rain or hills or sand or swamps, and lay out your roadway.  Each time a road is built, another location is accessible on the globe.  Continue in this way and eventually anyone can get anywhere on Limitless Science planet.  (To avoid having every square inch of the planet covered in roadways, we might stipulate that places within a bike ride or a walk of some road don't need their own roads.)

Beginning with the Second Law of Thermodynamics and moving through the 19th century to Poincare's insight into the chaotic behavior of complex systems, on up through the 20th century, we see that the limiting results stand in stark contradistinction to Limitless Science.  In fact, we'll need a different metaphor to visualize scientific progress in this world.  Our globe of ever-increasing roadways doesn't capture the fact that many times our inroads to results end up in dead ends.  So, by contrast, Limiting-Discovery Science isn't a pristine spherical object where any road (theory) can reach any destination; rather, obstructions like a Grand Canyon, a raging river, or a range of impassable mountains dot the landscape.  Building roads on Limiting-Discovery Planet is not a matter of plotting a straight line from a beginning point to a destination, but of negotiating around obstructions (limiting results) to get to final destinations.  We can have fun with this metaphor:  if Heisenberg's Uncertainty Principle is a Grand Canyon to be navigated around, then say the Second Law of Thermodynamics is the Himalayas, and Chaos Theory is the Pacific.  The point here is that scientific investigation discovers these impassable landmarks, and our knowledge of the world then proceeds along roads we've engineered as detours in light of these discoveries.  And, to push the metaphor a bit, we find out things we can't do--places we can't go--unlike on our Limitless Science globe, with its smooth, traversable surface.  It's no use building a road through the Grand Canyon, or over Mount Everest.  The features of this world eliminate options we thought we had before the discoveries.  Likewise, of course, with scientific discovery itself.

To expand on this point a bit more, what's interesting about the metaphor is that it helps us see that, in a way, every truth we discover about the world around us is fruitful and progressive.  Discovering that the Grand Canyon is a landmark of the Southwestern United States is only limiting if we'd assumed that our human ambitions to build roads all over our world would never be frustrated by "facts on the ground," so to speak.  But scientific investigation is in the end about discovering truths, and these truths are fruitful even when limiting (at least once we drop the assumption of perfect, linear progress), because knowing how the world really is, is bound to be fruitful.  If you can't, after all, drive across the Grand Canyon, it's fruitful to know that fact.  You can then build a system of roads that skirts around it, and you're on your way.  Similarly with scientific discovery:  when we realize we can't, for instance, isolate with arbitrary precision the position and momentum of a subatomic particle simultaneously, this knowledge about the observational limits of subatomic phenomena paves the way for a formulation of quantum mechanics in terms of statistics and probability, rather than causal laws that presuppose knowledge of impossibilities, and such results generate successful predictions elsewhere, along with technological and engineering innovations.  We learn, in other words, what we can do, when we discover what we can't.  And so it is with science, just as in other aspects of our lives.

This brings us to our next major point, which is that, unfortunately, scientific materialists hold metaphysical assumptions about the nature of the world that tend to force Limitless Science types of thinking about discovery.  If the entire universe is just matter and energy--if Physicalism is true, in other words--then every impossibility result emerging from scientific investigation is a kind of failure.  Why?  Because there's nothing mystical or immaterial about the universe, anywhere, and so one naturally assumes the sense of mystery and wonder will gradually give way as more and more physical knowledge is accumulated.  If Chaos Theory tells us that some physical systems exhibit a sensitive dependence on initial conditions such that long-range prediction of events in these systems is effectively impossible, this means only that with our current differential-equation techniques (say, the Navier-Stokes equations for fluid dynamics) we have some limits in those types of systems.  Since nothing more is going on than unpredictability arising from the properties of chaotic systems, there are sure to be advances in our ability to build roads that will gradually whittle away the limitations here.  And to the extent that this isn't fully possible, it expresses only a fact about our limited brains, say, or the limits of computation given the complexity of the world.
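Sensitive dependence can be exhibited without anything as heavy as the Navier-Stokes equations.  The logistic map, a standard one-line toy model of chaos, already shows it: two starting points differing by one part in a billion soon land nowhere near each other.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a standard
    toy model of chaotic dynamics (r = 4 is the chaotic regime)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)  # differs in the 9th decimal place
# The tiny initial error is roughly doubled at each step, so within a
# few dozen iterations the trajectories decorrelate completely; any
# rounding error in measurement ruins long-range prediction.
gap = max(abs(x - y) for x, y in zip(a, b))
```

No refinement of the equations helps here; the error growth is a property of the system itself, which is exactly the kind of limiting result at issue.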

To put all this another way, scientific materialists are committed to seeing limiting results in science either as placeholders until better methods come around, or as lacunae in our own noetic capabilities.  You might say these are the "we're too primitive" and "we're too stupid" responses to limiting results.  What is manifestly not possible, given a materialist presupposition, is that the limits point to real boundaries in our application of physical concepts to the world.

Roughly, there are two possibilities that materialists will ignore when confronted with Grand Canyons or the Himalayas on Limiting-Discovery Planet.  One, the Cartesian materialism presupposed in science since the Enlightenment might be wrong or incomplete, so that some expanded framework for doing science is necessary.  Two, there may be immaterial properties in the universe.  In this latter case, the reason we can't lay roadwork down through the parts of Arizona that intersect with the Grand Canyon is simply that there's no physical "stuff" there to work with; the Grand Canyon is a metaphor for something that is not reducible to matter and energy.  These are entirely possible, even reasonable, responses when contemplating the meaning of the limiting results (i.e., we're up against a part of the universe that isn't purely material, which is why material explanations fall short), but they won't be entertained seriously by scientific materialists.  Again, since all of the universe is just assumed to be matter and energy (and we have historical, roughly Cartesian accounts of matter and energy, to boot), limiting results will always end up as commentary about humans-when-doing-science (that we're either too primitive still to get it, or just too stupid, which is close to the same idea sans the possibility of future progress).

But we can rein in all this philosophical speculation, for the moment (though we'll have to return to it later).  We've left out maybe the most interesting limiting result not only of the last century, but perhaps ever.  It's arguably the most fruitful, as well, as its publication in 1931 led step by step to the birth of modern computation.  This is rather like discovering the Grand Canyon, reflecting on it for a while, kicking around in the hot desert sands, and then realizing a design for an airplane.  The result is Godel's Incompleteness Theorems.  As it's the sine qua non of our thesis--limiting results producing fruitful scientific research--we'll turn to it in some detail next.

[Godel's Theorem, Halting Problem, modern computation, computational complexity versus undecideability, AI]

AI: A Giant Limiting Result

Yet the lessons of science seem lost on the Digital Singularity crowd.  The apposite metaphor here is in fact the one we rejected in the context of actual science--Limitless Science Planet.  Everything is onward and upward, with progress, progress, progress.  Indeed, futurist and AI enthusiast Ray Kurzweil insists that the lesson of the last few hundred years of human society is that technological innovation is not merely increasing but increasing exponentially.  Such a nose-thumbing take on intellectual culture (including scientific discovery, of course, which drives technological innovation) excludes any real role for limiting results, and suggests instead the smooth, transparent globe of Limitless Science.  Indeed, as we build roads we become more and more capable of building better roads more quickly.  In such a rapidly transfiguring scenario, it's no wonder that Kurzweil and other Digital Singularity types expect Artificial Intelligence to "emerge" from scientific and technological progress in a few years' time.  It's assumed--without argument--that there are no Grand Canyons, or Mount Everests, or Pacific Oceans to fret about, and so there's nothing theoretical or in-principle that prevents computation--networks of computers, say--from coming alive into a superintelligence in the near future (we get the "near" part of "near future" from the observation that technological innovation is exponentially increasing).  This is Limitless Science at its finest.

But, in general, we've seen that Limitless Science isn't true.  In fact, there are lots of features of the actual world that the accretion of scientific knowledge uncovers to be limiting.  And indeed, the history of science is replete with examples of such limiting results bearing scientific and technological fruit.  The actual world we discover using science, in other words, is vastly different from the one the Digital Singularitists assume.  It's time now to return to the question of whether (a) an expanded framework for science or (b) an actual boundary to materialism in science is required.  But to do that, we'll need to tackle one final limiting result:  the problem of consciousness.

Tuesday, January 28, 2014

Prolegomena to a Digital Humanism

"What makes something fully real is that it's impossible to represent it to completion."  - Jaron Lanier

The entire modern world is inverted.  In-Verted.  The modern world is the story of computation (think:  the internet), and computation is a representation of something real, an abstraction from real particulars.  The computation representing everything and connecting it together on the internet and on our digital devices is now more important to many of us than the real world.  I'm not making a retrograde, antediluvian, troglodyte, Luddite point; it's deep, what I'm saying.  It's hard to say clearly because my limited human brain doesn't want to wrap around it.  (Try, try, try -- if only some computation would help me.  But alas.)

The modern world is inverted to the extent that abstractions of reality become more important than the real things.  A computer representation of an oil painting is not an oil painting.  Most people think the representation is the wave of the future.  The oil painting is, actually.

What's a computer representation of a person?  This is the crux of the problem.  To understand it we have to understand two big theory problems, and I'll be at some pains to explain them.  First, suppose I represent you -- or "model" you, in the lingo -- in some software code.  Suppose, for example, I model all the employees working at a company because I want to predict who fits best for a business project happening in, say, Europe.  (It's a big corporation with huge global reach and many business units, like IBM.  No one really knows all the employees and who's qualified for what, except locally perhaps.  The example is real-world.)  That necessarily means I have a thinner copy of the "real" you--I may not know you at all, so I'm abstracting you down to some data stored in a database:  your position, salary, latest performance reports, the work you do, a list of job skills.  Because abstractions are simplified models of real things, they can be used to do big calculations (like a database operation that returns all the people who know C++); it also means they leave out details.  Abstractions conceived of as accurate representations are a lie, to put it provocatively, as the philosopher Nietzsche once remarked.  (He said that words lie.  You say "this is a leaf," pointing to a leaf.  But there is no such thing as a "leaf," the abstract concept.  There are only leaves...)
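To make the "thinner copy" point concrete, here's a hypothetical sketch; the names, fields, and skills below are all made up for illustration, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class EmployeeModel:
    """A deliberately thin abstraction of a person: a handful of
    database fields. Everything else about the real person is lost."""
    name: str
    position: str
    salary: int
    skills: list = field(default_factory=list)

staff = [
    EmployeeModel("Ana", "engineer", 95000, ["C++", "Python"]),
    EmployeeModel("Ben", "analyst", 72000, ["Excel"]),
    EmployeeModel("Chloe", "engineer", 98000, ["C++"]),
]

# The payoff of abstraction: fast queries over many thin models...
cpp_devs = [e.name for e in staff if "C++" in e.skills]
# ...and the cost: nothing here can say whether Ana actually wants the
# Europe project, or how she works under pressure.
```

The query is exactly the "big calculation" the text mentions, and the missing detail is exactly the lie: the record answers "who knows C++" instantly and answers nothing else at all.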

All representations are in a language, and every language has limits to its expressiveness.  Natural languages like English are the most expressive, which is why a novel or a poem can capture more about the human experience than mathematics or computer code can.  This point is lost on many Silicon Valley 'Singularity' types--technologists and futurists who want computation to replace the messy real world.

Change the example if you want, because abstracting the real world, and especially human behavior, into slick computer models is all the rage today.  Examples abound.  Say I always shop at the same store.  I shop at Safeway.  Whenever I go to Safeway, I get a bunch of coupons when I check out at the self-checkout.  The coupons are strangely relevant--I get deals on protein bars and chocolate milk and so on.  The funny thing is that I buy all those items, but I didn't necessarily buy any of the coupon items on the trip when I received those coupons.  What's happening here is that Safeway has made a model of "me" in its databases, and it runs some simple statistics on my purchases as a function of time (like:  month by month by item type, say), and from this data it makes recommendations.  People like this sort of service, generally.  Talk to technologists and the people who're modernizing the consumer experience and you'll get a vision of our future:  walk into the supermarket and scan your ID into the cart.  It starts directing you to items you need, and recommending others:  "I noticed you bought tangerines the other day; you might like tangelos too.  They're on sale today on Aisle 5."

Now, nothing is wrong here, it's just a "lie" of sorts is all.  I'm generally not my Safeway model, is all.  Not completely.  The model of me that Safeway has is based on my past buying patterns, so if I change anything, or if the world changes, it's suddenly irrelevant and it starts bugging me instead of helping me.  It's a lie, like Nietzsche said, and so it gets out of sync eventually with the actual me that's a real person.  I don't buy chocolate, but on Valentine's Day I do, say.  Or I'm always buying ice cream, but last week I started the Four Hour Body diet, so now I only buy that on Saturdays, and I buy beans all the time instead.  But right when I get sick of them and start buying lentils, the system has a good representation of me as a bean-buyer, so now I'm getting coupons for beans at precisely the time I'm trying to go no-beans (but I'm still into legumes).  Or I'm running errands for someone else, who loves Almond Milk.  Almond Milk is on sale but I don't get that information; I only see that 2% Lactose Free milk is on sale, because I usually buy that.  The more the model of me is allowed to lord over me, too, the worse things get.  If the cart starts pulling me around to items that I "need", and it's wrong, I'm now fighting with a physical object -- the shopping cart -- because it's keeping me from buying lentils and Almond Milk.  None of this has happened yet, but welcome to creating mathematical objects out of real things.  The computer can't help with any of my buying behavior today, because it's got a stale, simple model of me based on my buying behavior yesterday.  That's how computers work.  (Would it be a surprise to learn that the entire internet is like this?  I mean:  a shallow, stale, simple model of everything?  Well, it is.  Read on.)
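The Safeway story boils down to a frequency model over past purchases.  A toy sketch (the purchase history is invented) shows exactly why it goes stale: the recommendation is a function of yesterday's counts, not today's intentions.

```python
from collections import Counter

def recommend(purchase_history, top_n=1):
    """Recommend whatever the shopper bought most often in the past --
    a crude stand-in for the store's coupon model."""
    counts = Counter(purchase_history)
    return [item for item, _ in counts.most_common(top_n)]

# Months of bean-buying, then a recent switch to lentils...
history = ["beans"] * 30 + ["ice cream"] * 5 + ["lentils"] * 2
coupons = recommend(history)
# ...so the model keeps pushing beans at precisely the moment the
# shopper has moved on: it only knows the past, never the change.
```

No amount of extra data of the same kind fixes this; the staleness is built into modeling a person as a tally of what they already did.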

Let's finish up with abstraction.  Someone like Matthew Crawford, who wrote the best-selling "Shop Class as Soulcraft," walked away from a six-figure job at a Washington, D.C. think tank, where he wrote about politics, to fix motorcycles:  once he realized that the modern world is inverted, and that abstractions are becoming more important than real things and experiences, he was desperate to find something meaningful.  He wasn't persuaded, as Silicon Valley culture seems to be, that all these abstractions are actually getting smarter and smarter and making us all better and better.  He opened a motorcycle repair shop in Virginia and wrote a book about how you can't rely on abstractions and be any good at fixing real things like motorcycles.

This is an interesting point, actually.  Crawford's an interesting guy.  You could write a dissertation just on how difficult it is to diagnose and fix something complicated.  You can download instructions and diagnostics from the internet, but you're not a real mechanic if you don't feel your way through the problem.  Computation is supposed to replace all of this embarrassing human stuff like intuition and skill and judgment.  "Feeling our way" through things is supposed to be a thing of the past now, and really the pesky "human element" is supposed to go away too.

A confusion of the modern inverted age is that as computers get smarter (but they don't, not like people do), we're supposed to get smarter and better, too.  But all this sanguine optimism that everything is getting "smarter" disguises the truth, which is that it's impossible for us to get "smarter" by pretending that computers are smarter--we have to choose.  For example, if we pretend that abstractions are "smart", we have to fit into them to continue the illusion.  If we start imposing the messy reality of life onto things, the abstractions will start looking not-so-smart, and then the entire illusion is gone.  Poof!  To the extent that we can't handle exposing our illusions, we're stooping down to accommodate them.  All this becomes clear when you open a motorcycle repair shop and discover that you have to feel your way through the problem and abstractions of fixes don't really help.

So much for Crawford.  There are so many Crawfords today, actually.  I think it's time to start piecing together the resistance to what Lanier calls the Digital Maoists or Cybernetic Totalists: the people saying that abstractions are more real and smart than what's actually real and smart.  The people saying the human element is old news and unimportant.  The people saying the digital world is getting smarter and coming alive.  If it sounds crazy (and it should), it's time to start pointing it out.

I can talk about Facebook now because I don't like Facebook at all and almost everyone I know seems to be obsessed with it.  (That makes me reluctant to complain too much, or, when I'm aggravated, emboldens me to complain with a kind of righteous indignation.)  Facebook is a model of you, built for the purpose of telling a story about you.  Who is reading the story, and why?  On the internet this is called "sharing," because you connect to other models called "friends" and information about your model is exchanged with your Friend-Models.  Some of this trickles down to the actual things, the people, so that we feel a certain way and receive a certain satisfaction.  It's funny that people who use Facebook frequently and report having many friends on Facebook also report greater loneliness in the real world.  Which way does the arrow of causality go?  Were they lonely types of people first?  Or does abstraction into shallow models, and the use of emotional words like "friends" and "social," somehow make their actual social existence worse?

I don't think there's much wrong with a shallow Facebook model of me or you, really.  Facebook started out as a way to gawk at attractive nineteen-year-old Harvard women, and if that's what you want to do, you need an abstraction that encourages photo sharing.  I don't necessarily want the experience to be deep, and I don't want three hundred friends to have a deep model of me online, either.

Theoretically, though, the reason Facebook models are shallow is the same reason Safeway only wants my buying behavior in my Supermarket Model.  Since "Facebook" is really a bunch of servers (a "server" is just a computer that services other computers), what the real people who own Facebook can do with our models is determined by what Facebook's computers can do with them.  Since computers are good at doing lots of shallow things quickly (think of the Safeway database), why would Facebook keep rich models of us?  It couldn't do much with them.  It's an important but conspiratorial-sounding point that most of what Facebook wants to do with your Facebook model, connected to your Facebook friend models, is run statistics on which ads to sell you.  It's another significant, bombshell-type observation that all the supposed emerging smartness of the World Wide Web is laser-focused on targeted advertising.  All the liberation we think we feel is really disguising huge seas of old-fashioned persuasion and advertising.  Because everything we get online is (essentially) free -- think Facebook -- it's no wonder that the actual money is concentrated in ads.  (Where does the actual earned money still come from, to buy the advertised goods and services?  That gets us back into the messy real world.)
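To make the "shallow things quickly" point concrete, here's a minimal sketch.  The profile fields and ad categories are invented for illustration; the point is just the kind of computation a pile of servers is actually good at, which is sweeping shallow rows and tallying which ad categories to target.

```python
from collections import Counter

# Hypothetical shallow "models" of people: a few fields, nothing more.
profiles = [
    {"age": 29, "likes": ["cycling", "coffee"]},
    {"age": 34, "likes": ["coffee", "jazz"]},
    {"age": 22, "likes": ["coffee", "jazz"]},
]

# The kind of thing servers do well: scan every shallow row and tally
# which ad categories to target.  No depth required, just speed.
tally = Counter(like for p in profiles for like in p["likes"])
print(tally.most_common(2))  # [('coffee', 3), ('jazz', 2)]
```

Scale the list up to a billion rows and you have, roughly, the business model.  Nothing in it requires a rich model of anyone.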

So much for abstraction.  Let's say that abstraction is often shallow, and even vapid.  It's incomplete.  This says something true.  It means that we ought to think of computation as a shallow but convenient way to do lots of things quickly.  We shouldn't confuse it with life.  Life here includes the mystery of human life: our consciousness, our thoughts, and our culture.  We confuse abstractions with the fullness of life at our peril.  It's interesting to ask what vision of digital technology would support a better cultural experience, or whether shallow, infantile, ad-driven models are the best we can do.  Maybe the reason we cheerlead so loudly for Facebook and the coming "digital revolution" is that we think it's the only way things can turn out, and it's better than horses and buggies.  This is a really unfortunate way to think about technology and innovation, I would say...

The second way the modern world is inverted (the second theoretical problem with treating computer models as reality) is known as the Problem of Induction (POI).  The former trader Nassim Nicholas Taleb describes the POI as the problem of the Black Swan.  Most swans (the vast majority of swans) are white, so eventually you generalize, in your code or your database or your mind, to something like "All swans are white."  Taleb calls the expectation behind this a Gaussian distribution (or normal distribution), because you don't expect outliers that screw everything up.  Taleb says that real events in the world sometimes don't follow Gaussian distributions but fat-tailed ones.  He calls this the Black Swan phenomenon.  It's tied to the ancient POI, as I'll explain.  I mean: a Black Swan shows up, and we all thought "All swans are white."

We'll say first that a Gaussian or normal distribution is like the height of people in the real world.  Most people, the vast majority in fact, are between 5 and 6 feet tall.  That's a normal distribution.  It's rare to meet a 7-foot man or a 4-foot one, and essentially impossible to meet a 9-foot or a 3-foot one.  If human height followed a fat-tailed distribution, though (a "Black Swan" distribution), then occasionally there'd be a guy a hundred feet tall.  He'd be rare, but unlike with the Gaussian, he'd be guaranteed to show up one day.  This would screw something up, no doubt, so it's no wonder we prefer the Gaussian for most of the representing we do of the actual world.
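A quick simulation makes the contrast vivid.  This is only an illustrative sketch: the Gaussian parameters are rough stand-ins for human height, and a Pareto distribution stands in for Taleb's fat tails.

```python
import random

random.seed(0)
N = 100_000

# Gaussian "heights": mean 5.5 ft, sd 0.35 ft.  Extreme outliers are
# astronomically unlikely, so the tallest of 100,000 people stays near 7 ft.
gaussian = [random.gauss(5.5, 0.35) for _ in range(N)]

# Fat-tailed "heights": a Pareto tail decays so slowly that enormous
# values are guaranteed to show up eventually.
fat_tailed = [5.5 * random.paretovariate(1.5) for _ in range(N)]

print(max(gaussian))    # stays near 7
print(max(fat_tailed))  # a "hundred-foot man", or far taller
```

Run it a few times with different seeds: the Gaussian maximum barely moves, while the fat-tailed maximum jumps around wildly.  That instability is the whole point.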

Taleb explains, however, that when it comes to social systems like the economy, we unfortunately get Black Swans.  We occasionally get hundred-foot-tall people; in other words, we get unpredictable market crashes.  We can't predict when they'll happen, he says, but we can predict that they'll come around eventually and screw everything up.  He says that when we create shallow abstractions of real economic behavior, with credit default swaps and derivatives and other mathematical representations of the real world, we are guaranteed to get less predictable behavior and really large anomalies (like hundred-foot-tall people).  So, he says, the economy is not Gaussian.

All of this is well and good, but the computer modeling is based on Gaussian principles.  This is what's called Really Bad, because we're relying on all that modeling, remember.  It means that as we make the economy "digital" with shallow mathematical abstractions (like default swaps), we also make it more of a "lie," insofar as the Black Swan feature tends to get concealed in the layers of Gaussian computation we're using to make money.  All the money is made possible when we get rid of the rich features of reality, like actual real estate, and digitize them.  If we know that, sooner or later, we're guaranteed to lose all the money we've made, because the future behavior of these systems contains a Black Swan, but our computer models assure us that the swans are all white, do we care?  As long as we make the money now, maybe we don't.  If we know we're getting lonely on Facebook but we still have something to do at night with all of our representations of friends, do we care?  It takes some thought to figure out what we care about, and whether we care at all.  (It's interesting to ask whether we start caring, as a rule, only after things seem pretty bad.)

This is the case with the economy, it seems.

So the second big theoretical problem with the inverted modern world is that computation is inductive.  This is a fancy way of saying that the Safeway database cannot figure out what I might like except on the basis of what I've already proven I like.  It doesn't know the real me, for one; it knows the abstraction.  And even more importantly, because computation is inductive, it must always infer something about me or my future from something known about me and my past.  Human thought itself is partly inductive, which is why I'll expect you to show up at around 5 pm at the coffee shop on Thursdays, because you always do.  But I might also know something about you, say, that you're working at 5.

Knowing that you're working at 5 on Thursday is called "causal knowledge," because I know something about you rather than just my past observations of you showing up.  I have some insight about you.  It's "causal" because if you work at 5 on Thursday, that causes you to be there, regardless of whether you've shown up in the past.  It's a more powerful kind of knowledge about you.  We want our computers to have insights like this, but really they are more at home with a database full of entries recording your prior arrivals on Thursdays at 5.  The computer doesn't know or care why you show up.  This is induction.

Induction applies to the Black Swans of stock market crashes because we were all thinking "All swans are white" based on our computer models of the past.  Those models were wrong, it turns out, so we didn't see the Black Swan coming.  If we hadn't been convinced the computer models were so smart, we might have noticed the fat-tailed properties of the system.  Or we might have noticed the inherent, real-world volatility that we were amplifying by abstracting it, and by relying on inductive inferences instead of causal knowledge or insight.  Computers are very good at convincing us we're being very smart by analyzing all those huge data sets from the past.  When something not in that past shows up, they're also very good at making things chaotic.  This is a reminder that the real world is actually in charge.

It's very complicated to explain why computers don't naturally have the "insight" or "causal knowledge" part of thinking that we do (and why they can't really be programmed to have it in future "smarter" versions, either).  Generally, Artificial Intelligence enthusiasts will insist that computers will get smarter and eventually have insights that predict the Black Swans (the very ones they've also made possible).  In general, however, the Problem of Induction, a kind of blind spot (to go along with the "lie" of abstraction), is part and parcel of computation.  If you combine this inductive blindness with the shallowness of the models, you get a world that is really good at doing simple things quickly.  If you question whether this is our inevitable future, and whether perhaps there are entirely new vistas of human experience and culture available to us (including in the technology we make), I think you're on the right track.

Here is a representation of me: 42 73 1 M.  What does it mean?  I once used something called "log-linear" modeling to predict who would pay traffic tickets, in data provided by the state of Iowa (true).  We used the models of hundreds of thousands of people, with database entries like this example but more complicated, to find those with a likelihood greater than some threshold n of never paying.  Then we recommended that the state of Iowa not bother with these people.  It worked pretty well, actually, which is why we make shallow representations for tasks like this...
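Here's roughly what that kind of scoring looks like.  To be clear, the features, weights, and threshold below are all invented for illustration (the real Iowa fields and fitted model aren't reproduced here), and I've sketched it as a simple logistic score rather than the actual log-linear fit.

```python
import math

# Invented shallow records; the real fields were different.
records = [
    {"id": 1, "age": 42, "prior_unpaid": 3, "out_of_state": 1},
    {"id": 2, "age": 73, "prior_unpaid": 0, "out_of_state": 0},
]

# Invented weights standing in for fitted model coefficients.
WEIGHTS = {"age": -0.02, "prior_unpaid": 0.9, "out_of_state": 0.7}
BIAS = -0.5

def p_never_pays(rec):
    # Logistic score: a weighted sum of shallow features, squashed into (0, 1).
    z = BIAS + sum(w * rec[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Flag anyone whose estimated no-pay likelihood exceeds the threshold n.
n = 0.5
dont_bother = [r["id"] for r in records if p_never_pays(r) > n]
print(dont_bother)  # [1]
```

A handful of numbers per person, a weighted sum, a cutoff.  That's the whole "model of you" doing the work, which is exactly the shallowness being described.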

What's funny about technologists is how conservative they are.  A hundred and fifty years ago, the technologists were passionately discussing the latest, most powerful methods for extracting whale oil from the blubber of sperm and baleen whales harpooned and gutted against the wooden hulls of Atlantic whalers.  No one stopped to wonder whether the practice was any good, because it seemed inevitable.  There was money to be made, too.  No one even considered that perhaps there was something better, until petroleum showed up.  This is why you see techno-futurists like Kevin Kelly, co-founder of Wired magazine, or the author and futurist Ray Kurzweil, always talking as if they can extrapolate our digital future from observations of the past.  They pretend that seeing into the future is simple, like a computation.  Kelly is also eager to explain that technological innovation is not the product of individual human insight and genius but rather a predictable, normal process.  The great philosopher of science Karl Popper explained why technological innovation is intrinsically unpredictable.  But you can see that Kelly and folks like Clay Shirky (Here Comes Everybody, Cognitive Surplus) already see the future and have already concluded that humans have less and less to do with it, as digital technology gets smarter and smarter.  All these predictions, and all those books sold (real paper books, too!), would be wrong if someone just invented a better mousetrap, like people always do.  When petroleum became readily available, all the whale-oil predictions became silly and retrograde almost overnight.

If you believe there are no Black Swans, and that things are moving in one inevitable direction, you won't like these comparisons (will you?).  But the real world is messy, and technology is not smart in the way that human minds are, so if we want to predict the future being described, we have to pretend.  When everything is shallow (abstraction) and quick but limited (induction), you need something to grab onto to compensate, which is why we say all the computation will get "smarter."  If it doesn't, we're stuck pretending that shallow-and-quick is human culture.  That's too hard to do, eventually, which is why we have innovation, why the Atlantic whalers eventually became obsolete, and why we're due for some different digital designs than the ones we have now.  I have some thoughts on this, but that's the subject of another discussion.