Saturday, May 4, 2013

Zen and the Art of Staring at Brains

     Neuroscience is an exciting, and frustrating, business for practitioners of Artificial Intelligence (AI) and for other fields, like cognitive science, dedicated to reverse engineering the human mind by studying the brain. Leibniz anticipated much of the modern debate centuries ago, when he remarked that if we could shrink ourselves to microscopic size and "walk around" inside the brain, we would never discover a hint of our conscious experiences. We can't see consciousness, even if we look at the brain. The problem of consciousness, as philosophers call it, is a thorny one that unfortunately has not yielded its secrets even as techniques for studying the brain in action have proliferated in the sciences. Magnetic Resonance Imaging (MRI), functional MRI (fMRI), and other technologies give us detailed maps of brain activity that have proven enormously helpful in diagnosing and treating a range of brain-related maladies, from addiction to head trauma to psychological disorders. Yet, for all the sophistication of modern science, Leibniz's remark remains prescient. If the brain is the seat of consciousness, why can't we explain consciousness in terms of the brain?
    When I was a Ph.D. student at Arizona in the late 1990s, many of the philosophic and scientific rock stars would gather at the interdisciplinary Center for Consciousness Studies and discuss the latest theories on consciousness. While DNA co-discoverer Francis Crick declared in his The Astonishing Hypothesis that "a person's mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them", scientists like Christof Koch of Caltech pursued specific research on "the neural correlates of consciousness" in the visual system, and Stuart Hameroff, along with physicist Roger Penrose, searched for the roots of consciousness at still more fundamental levels, in quantum effects in the microtubules of our brains. Philosophers dutifully provided the conceptual background, from Paul and Patricia Churchland's philosophic defense of Crick's hypothesis in eliminativism--the view that there is no problem of consciousness because "consciousness" isn't a real scientific object of study (it's like an illusion we create for ourselves, with no physical reality)--to David Chalmers' defense of property dualism, to "mysterians" like Colin McGinn, who suggested that consciousness is simply beyond our understanding. We are "cognitively closed" to certain explanations, says McGinn, like a dog trying to understand Newtonian mechanics.
     Yet with all the great minds gathered together, solutions to the problem of consciousness were in short supply. In fact the issues, to me anyway, seemed to become larger, thornier, and more puzzling than ever. I left Arizona and returned to the University of Texas at Austin in 1999 to finish my master's degree (I was a visiting grad student at Arizona), and I didn't think much more about consciousness--with all those smart people drawing blanks, contradicting each other, and arguing over basics like how even to frame the debate, who needed me? (Ten years later I checked in with the consciousness debate to discover, as I suspected, that the issues on the table were largely unchanged. As Fodor once put it, the beauty of philosophical problems is that you can let them sit for decades and return without missing a step.) My interest anyway was in human intelligence, not consciousness.
     Intelligence. Unlocking the mystery of how we think was supposed to be a no-brainer (no pun intended). Ever since Turing gave us a model of computation in the 1930s, researchers in AI have been proclaiming that non-biological intelligence is imminent. A half century ago, Nobel laureate and AI pioneer Herbert Simon--who would go on to win the prestigious A.M. Turing Award as well as the National Medal of Science--declared in 1957 that "there are now in the world machines that think, that learn and that create", and later prognosticated in 1965 that: "[By 1985], machines will be capable of doing any work Man can do." In 1967, AI luminary Marvin Minsky of MIT predicted that 'within a generation, the problem of creating "artificial intelligence" will be substantially solved'.
     Of course, none of this happened. Even stalwart visionaries have their limits, and so today you'll hear nary a whimper about the once-crackling project of programming a computer to think like a person. Something was missed. Today, the neuroscience model is all the rage. Apparently what was wrong with "GOFAI", or "Good Old Fashioned AI", was precisely what was once touted as its strength: we can't ignore how our brains work in understanding intelligence, because brains are the only things we know of that actually think. Programming intelligent robots from armchair principles and theories--well, that was bound to fail.
     Enter Modern AI, which has essentially two active programs. On the one hand, as I've mentioned, we want to reverse-engineer intelligence in artifacts by studying the brain. On the other, we can view intelligence as an emergent phenomenon of complex systems. Since the actual "stuff" of the complex system isn't the point, but rather its complexity, in principle we might see intelligence emerge from something like the World Wide Web. I'm not sure how seriously to take the latter camp, as it seems a little too glib to take our latest technological innovations and (again and again, in the history of technology) declare them the next candidate for AI. Let's set this aside for now. But the former camp has a certain plausibility to it, and in the wake of the failure of traditional AI, what else do we have? It's the brain, stupid.
    Well, it is, but it's not. Mirroring the consciousness conundrums, the quest for AI, now anchored in brain research, appears destined for the same hodgepodge, Hail Mary-style theorizing and prognosticating as consciousness studies. There's a reason for this, I think, which is again prefigured in Leibniz's pesky comment. To unpack it all, let's look at some theories popular today.
     Take Jeff Hawkins. Famous for developing the Palm Pilot and an all-around smart guy in Silicon Valley, Hawkins dipped his toe into the AI waters in 2004 with the publication of his On Intelligence, a bold and original attempt to summarize the volumes of neuroscience data about thinking in the neocortex with a hierarchical model of intelligence. The neocortex, Hawkins argues, takes input from our senses and "decodes" it in hierarchical layers, with each higher layer making predictions from the data provided by the layer below, until we reach the top of the hierarchy and some overall predictive theory is synthesized from the output of the lower layers. His theory makes sense of some empirical data, such as differences in our responses based on the different types of input we receive. For "easier" predictive problems, the propagation up the neocortical hierarchy terminates sooner (we've got the answer); for tougher problems, the cortex keeps processing and passing the neural input up to higher, more powerful and globally sensitive layers. The solution is then made available, or passed back to lower layers, until we have a coherent prediction based on the original input.
     Hawkins has an impressive grasp of neuroscience, and he's an expert at using his own innovative brain to synthesize lots of data into a coherent picture of human thinking. Few would disagree that the neocortex is central to any understanding of human cognition, and intuitively (at least to me) his hierarchical model explains why we sometimes pause to "process" more of the input we're receiving from our environments before we have a picture of things--a prediction, as he says. He cites the commonsense case of returning to your home and finding the doorknob moved a few inches to the left (or right). The prediction has been coded into lower levels because prior experience has made it rote: we open the door again and again, and the knob is always in the same place. So when it's moved slightly, Hawkins claims, the rote prediction fails, and the cortex sends the visual and tactile data further up the hierarchy until the brain gives us a new prediction (which in turn will spark other, more "global" thinking as we search for an explanation, and so on).
     Ok. I'll buy it as far as it goes, but the issue with Hawkins, as with the consciousness debates, is that it doesn't really "go". How exactly are we intelligent, again? For all the machinery of a hierarchical model of intelligence, "intelligence" itself remains largely untouched. Offering that it's "hierarchical" and "located in the neocortex" is hardly something we can reverse engineer, any more than we can explain the taste of a fine red wine by pointing to quantum events in microtubules. "So what?" one might say, without fear of missing the point. To put it another way, we don't want a brain-inspired systems-level description of the black box of human intelligence--how we see what is relevant in complex, dynamic environments--we want a systems description of how the box itself works. What's in the box? That's the theory we need, but it's not what we get from Hawkins, however elaborate the hierarchical vision of the cortex might be.
     To prove my point, we might put "our money where our mouth is", as they say, and take his entire theory and code it up in a software system that reproduces exactly the connections he specifies. What will happen? Well, the smart money says "not much", because such systems have in fact been around for decades in computer science (hierarchical models of inference, and so on), and we already know that the details at the level of systems don't provide the underlying juice--whatever it is--to actually reproduce thinking. If they did, the millions of lines of code generated from every imaginable conceptualization of intelligent machine thinking would have hit on it. Something more is going on than systems thinking. (As it turns out, Hawkins himself has largely proved my point. He launched a software company that provides predictions of abnormal network events for security purposes, using computational models inspired by his research on the neocortex. Have you heard of the company? Me neither, until I read his web page. Not to be cruel, but if he had decoded human intelligence in a programmable way, the NASDAQ would have told us by now.)
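     To make the point concrete, here is a deliberately crude sketch--my own toy rendering in Python, not Hawkins' model or anyone's shipping code, with the layer names and stored "predictions" invented purely for illustration--of the kind of escalate-on-prediction-failure hierarchy described above:

```python
# Toy sketch (mine, not Hawkins'): each layer holds a stored expectation; if a
# layer's prediction matches the observation, processing stops there; otherwise
# the observation escalates to the next, "higher" layer in the hierarchy.

class Layer:
    def __init__(self, name, predictions):
        self.name = name
        self.predictions = predictions  # invented context -> expectation map

    def predict(self, observation):
        # True if this layer's stored expectation matches what was observed.
        return self.predictions.get(observation["context"]) == observation["value"]


def process(observation, hierarchy):
    """Pass an observation up the hierarchy until some layer's prediction holds."""
    for layer in hierarchy:
        if layer.predict(observation):
            return f"resolved at {layer.name}"
        # Prediction failed at this layer: escalate to the next one up.
    return "no prediction: novel input, new learning required"


# The doorknob case: the low layer expects the knob where it has always been;
# the higher layer (artificially, for the sake of the toy) handles the moved knob.
low = Layer("low-level sensory layer", {"front door knob": "usual position"})
high = Layer("higher associative layer", {"front door knob": "moved a few inches"})

print(process({"context": "front door knob", "value": "usual position"}, [low, high]))
print(process({"context": "front door knob", "value": "moved a few inches"}, [low, high]))
```

Notice how little is actually in there: the hierarchy, the escalation, the "predictions" are all present and accounted for, and yet nothing remotely like intelligence has been specified. That, in miniature, is the worry.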
     I'm not picking on Hawkins. Let's take another popular account, this time from a practicing neuroscientist and all-around smart guy, David Eagleman. Eagleman argues in his 2012 Incognito that the brain is a "team of rivals", a theory that mirrors AI researcher Marvin Minsky's agent-based approach to reproducing human thought in The Society of Mind (1986) and, later, The Emotion Machine (2006). The brain reasons and thinks, claims Eagleman, by proposing different interpretations of the sense data arriving from our environment and, through refinement and checking against available evidence and pre-existing beliefs, allowing the "best" interpretation to win out. Different systems in the brain provide different pictures of reality, and the competition among these systems yields stable theories or predictions at the level of our conscious beliefs and thoughts.
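     In the same toy spirit--my own illustration, not Eagleman's or Minsky's code, with the rival "systems" and their scores invented outright--a "team of rivals" can be written down just as easily: several subsystems each score an interpretation of the same sense data, and the highest scorer becomes the belief.

```python
# Toy "team of rivals" sketch (my invention, not Eagleman's model or code):
# rival subsystems each propose an interpretation of the same sense data with a
# confidence score, and the highest-scoring interpretation "wins" the belief.

def visual_system(data):
    return ("it's a snake", 0.4 if "long and thin" in data else 0.1)

def memory_system(data):
    return ("it's the garden hose you left out", 0.7 if "in the garden" in data else 0.2)

def fear_system(data):
    return ("whatever it is, back away", 0.6 if "moving" in data else 0.1)

def team_of_rivals(data, rivals):
    """Let every rival propose an interpretation; the best-scoring one wins out."""
    proposals = [rival(data) for rival in rivals]
    return max(proposals, key=lambda proposal: proposal[1])

belief, score = team_of_rivals({"long and thin", "in the garden"},
                               [visual_system, memory_system, fear_system])
print(belief)  # the memory system's reading wins: "it's the garden hose you left out"
```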
    This is a quick pass through Eagleman's ideas on intelligence, but even if I were to dedicate several more paragraphs of explanation, I hope the reader can see the same problem up ahead. One is reminded of Daniel Dennett's famous quip in his "Cognitive Wheels" article about artificial intelligence, where he likens AI to explanations of a magic trick:
                 "It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half-trick, they explain that it is really quite obvious:  the magician doesn't really saw her in half; he simply makes it appear that he does.  'But how does he do that?' we ask.  'Not our department', say the philosophers--and some of them add, sonorously:  'Explanation has to stop somewhere.'

But the "team of rivals" explanation has stopped, once again, before we've gotten anywhere meaningful.  Of course the brain may be "like this" or may be "like that" (insert a system or model); we're searching for what makes the systems description work as a theory of intelligence in the first place.  "But how?" we keep asking (echoing Dennett).  Silence.

  Well, not to pick on Eagleman, either. I confess that I thoroughly enjoyed his book, at least right up to the point where he tackles human intelligence. It's not his fault. If someone is looking at the brain to unlock the mystery of the mind, the specter of Leibniz is sure to haunt them, no matter how smart or well informed. The issue with human intelligence is not the sort of thing that can be illuminated by poking around the brain to extract a computable "system"--the systems-level description gives us an impressive set of functions and structures that can be written down and discussed, but it's essentially an inert layer sitting on top of a black box. Again, it's what's inside the box that we want, however many notes we scribble on its exterior. There's a more precise way of stating my point here, to which we now turn. 
     Enter Michael Polanyi. The chemist and philosopher was known for his political theorizing, his scientific research in chemistry, and his large, difficult treatises arguing against mechanical conceptions of science. In my view, he should also be known for supplying the key insight that sheds light on why the project of unlocking intelligence (or consciousness) with systems engineering is bound to fall short. Polanyi recognized that a lot of what makes us intelligent is what he called tacit knowledge: knowledge that we possess but can't express in language (including computer language). Human skills, he said, can't be captured by maxims or rules (though such rules may guide and assist someone who already possesses a skill).
     It's a simple observation. Take riding a bike. If we try to teach someone to ride a bike by handing them a set of rules to follow, we'll have to buy extra band-aids, because no one can learn to ride a bike by following a set of rules. The same goes for other skills, like swimming, or (more interestingly) discovering new theories in science. There's no rule book.
     What's interesting about this apparently straightforward and benign observation is its deflating effect on so much of the current systems enthusiasm for explaining ourselves. Even if we are following some path through an elaborate (say, hierarchical) system, we can't articulate this system into a symbolic representation without losing some of the magic sauce that lies at the core of intelligence. It's not that intelligence is mystical, or necessarily non-physical (one need not be a Cartesian dualist); it's that it's not amenable to articulation, whatever it is. Yet what else concerns the modern neuroscience or AI proponent than an incessant, near-obsessive desire to capture the system--to write down the set of rules, in other words? Polanyi's seemingly innocuous point about the non-articulable aspects of human thinking casts suspicion on such a project from the start. Intelligence, then, isn't something we can describe at the level of a (computable) system--a symbolic representation--any more than consciousness is. It suggests particular system descriptions perhaps (if we're lucky), but is not itself captured by them.
     But if Polanyi is right, then the entire project of articulating a systems-level account of intelligence, such that some other rule-following artifact like a computer can reproduce it, is doomed. Again, if we use tacit (not symbolically representable) knowledge to act intelligently in our day-to-day lives--from riding a bike to discovering a cure for malaria--then attempts to symbolically represent intelligence will always leave something out. It's as if we have a beautiful car, with shiny wheels, brilliant paint, tinted windows... and no engine. The "engine" stays in the brain, frustratingly, while we articulate the complex systems surrounding it. To paraphrase philosopher Jerry Fodor in his 2001 The Mind Doesn't Work That Way: it's no wonder our robots still don't work. If Polanyi is right, there's a good reason why.
     So, like the consciousness tales told by very intelligent people but still not signifying what we wish, the quest for Artificial Intelligence rages on and on, with no smart robots in sight. We're up against a fundamental limitation. Well, so what? We needn't despair of ever explaining ourselves somehow, or take refuge in glib techno-futurist predictions divorced from reality (futurist Ray Kurzweil has famously predicted that computers will completely reproduce minds by 2029, to take one particularly bombastic example of our capacity to remain impervious to the deeper issues in AI). In fact, seemingly deflating conclusions--if true--can often lead to better ideas tomorrow. It's not troubling or threatening to figure out a real limitation--it's progress.
     Consider the history of science. For instance, take the famous mathematical logician Kurt Gödel. Gödel proved his famous incompleteness theorems in 1931, putting to rest a longtime dream of mathematicians to reduce mathematics to logic. Gödel showed that any formal system complex enough to be interesting (the Peano axioms for arithmetic, basically) has a fundamental limitation: it can't be both consistent and complete. This means that mathematical thinking lies outside the scope of logical proof (and hence computation), no matter how complex the logical formalism one uses. Yet, far from shutting down further research in mathematics, the result arguably paved a path to modern computation: Turing, building on Gödel's work, proved the undecidability of the Halting Problem and, in the same 1936 paper, provided the theoretical model--the Turing machine--for universal computing machines. Not bad for a limiting result.
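     For the curious, the heart of Turing's limiting result can be sketched in a few lines. This is the standard textbook diagonal argument, rendered here in Python-flavored pseudocode with an invented halts() oracle--nothing Turing himself wrote:

```python
# Standard diagonalization sketch (textbook form, not Turing's notation).
# Suppose, for contradiction, that a total procedure halts(program, argument)
# existed that always correctly answers "does `program` halt on `argument`?"

def halts(program, argument):
    # Hypothetical oracle; the argument below shows no such procedure can exist.
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about `program` run on itself.
    if halts(program, program):
        while True:   # the oracle says "halts", so loop forever
            pass
    else:
        return        # the oracle says "loops forever", so halt immediately

# Now ask: does paradox(paradox) halt? Either answer contradicts the oracle's
# verdict, so halts() cannot exist--the Halting Problem is undecidable.
```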
     One might make similar observations about, say, Heisenberg's Uncertainty Principle, according to which the position and the momentum of a particle cannot both be measured with arbitrary precision. Again, a limitation. And again, an active area of research (quantum mechanics). So the question, to me, is not whether we're being psychologically depressing or retrograde if we acknowledge the core problems we face in unlocking the mysteries of thinking (as Chalmers once said about consciousness, you first have to feel the problem "in your bones"). A patina of impressive, neuroscience-informed systems theories may generate lots of journal publications, but the proof of a system is in its success, and I don't think we should be too sanguine about success given a serious appraisal of the challenges we face. Yet the history of science suggests that our limitations, once acknowledged, may in fact prove vastly more productive in the long run than continuing to make the same errors. We may, in other words, simply be on the wrong path. That's not a limitation, it's knowledge.
