When I was a Ph.D. student at Arizona in the late 1990s, many of the philosophic and scientific rock stars would gather at the interdisciplinary Center for Consciousness Studies and discuss the latest theories of consciousness. While DNA co-discoverer Francis Crick declared in The Astonishing Hypothesis that "a person's mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them", scientists like Christof Koch of Caltech pursued specific research on "the neural correlates of consciousness" in the visual system, and Stuart Hameroff, along with physicist Roger Penrose, searched for the roots of consciousness at still more fundamental levels, in quantum effects in the microtubules of our neurons. Philosophers dutifully provided the conceptual background, from Paul and Patricia Churchland's philosophic defense of Crick's hypothesis in eliminativism--the view that there is no problem of consciousness because "consciousness" isn't a real scientific object of study (it's an illusion we create for ourselves, with no physical reality)--to David Chalmers' defense of property dualism, to the "mysterians" like Colin McGinn, who suggested that consciousness is simply beyond our understanding. We are "cognitively closed" to certain explanations, says McGinn, like a dog trying to understand Newtonian mechanics.
Yet with all the great minds gathered together, solutions to the problem of consciousness were in short supply. In fact the issues, to me anyway, seemed to become larger, thornier, and more puzzling than ever. I left Arizona and returned to the University of Texas at Austin in 1999 to finish my master's degree (I had been a visiting grad student at Arizona), and I didn't think much more about consciousness--with all those smart people drawing blanks, contradicting each other, and arguing over basics like how even to frame the debate, who needed me? (Ten years later I checked in on the consciousness debate to discover, as I suspected, that the issues on the table were largely unchanged. As Fodor once put it, the beauty of philosophical problems is that you can let them sit for decades and return without missing a step.) My interest anyway was in human intelligence, not consciousness.
Intelligence. Unlocking the mystery of how we think was supposed to be a no-brainer (no pun intended). Ever since Turing gave us a model of computation in the 1930s, researchers in AI have been proclaiming that non-biological intelligence is imminent. Nobel laureate and AI pioneer Herbert Simon won the prestigious A.M. Turing Award as well as the National Medal of Science. None of that brilliance stopped him from declaring in 1957 that "there are now in the world machines that think, that learn and that create", or from prognosticating in 1965 that "[by 1985], machines will be capable of doing any work Man can do." In 1967, AI luminary Marvin Minsky of MIT predicted that 'within a generation, the problem of creating "artificial intelligence" will be substantially solved'.
Of course, none of this happened. Even stalwart visionaries have their limits, and so today you'll hear nary a whimper about the once-crackling project of programming a computer to think like a person. Something was missed. Today, the neuroscience model is all the rage. Apparently what was wrong with "GOFAI" or "Good Old Fashioned AI" was precisely what was once touted as its strength: its indifference to the brain. We can't ignore how our brains work if we hope to understand intelligence, because brains are the only things we know of that actually produce it. Programming intelligent robots from armchair principles and theories--well, that was bound to fail.
Enter Modern AI, which has essentially two active programs. On the one hand, as I've mentioned, we want to reverse-engineer intelligence by studying the brain and reproducing what we find in artificial artifacts. On the other, we can view intelligence as an emergent phenomenon of complex systems. Since the actual "stuff" of the complex system isn't the point, but rather its complexity, in principle we might see intelligence emerge from something like the World Wide Web. I'm not sure how seriously to take the latter camp; it seems a little too glib to take our latest technological innovation and (as has happened again and again in the history of technology) declare it the next candidate for AI. Let's set this aside for now. But the former camp has a certain plausibility to it, and in the wake of the failure of traditional AI, what else do we have? It's the brain, stupid.
Well, it is, but it's not. Mirroring the consciousness conundrums, the quest for AI, now anchored in brain research, appears destined for the same hodgepodge of Hail Mary theorizing and prognosticating as consciousness studies. There's a reason for this, I think, which is again prefigured in Leibniz's pesky comment. To unpack it all, let's look at some theories popular today.
Take Jeff Hawkins. Famous for developing the Palm Pilot and an all-around smart guy in Silicon Valley, Hawkins dipped his toe into the AI waters in 2004 with the publication of On Intelligence, a bold and original attempt to summarize the volumes of neuroscience data about thinking in the neocortex with a hierarchical model of intelligence. The neocortex, Hawkins argues, takes input from our senses and "decodes" it in hierarchical layers, with each higher layer making predictions from the data provided by the layer below, until we reach the top of the hierarchy and some overall predictive theory is synthesized from the output of the lower layers. His theory makes sense of some empirical data, such as differences in our responses to different types of input. For "easier" predictive problems, the propagation up the neocortical hierarchy terminates sooner (we've got the answer); for tougher problems, the cortex keeps processing, passing the neural input up to higher, more powerful and globally sensitive layers. The solution is then made available, or passed back down to lower layers, until we have a coherent prediction based on the original input.
Hawkins has an impressive grasp of neuroscience, and he's an expert at using his own innovative brain to synthesize lots of data into a coherent picture of human thinking. Few would disagree that the neocortex is central to any understanding of human cognition, and intuitively (at least to me) his hierarchical model explains why we sometimes pause to "process" more of the input we're receiving from our environment before we have a picture of things--a prediction, as he says. He cites the commonsense case of returning home to find the doorknob moved a few inches to the left (or right). The prediction has been coded into lower levels because prior experience has made it rote: we open the door again and again, and the knob is always in the same place. So when it's moved slightly, Hawkins claims, the rote prediction fails, and the cortex sends the visual and tactile data further up the hierarchy until the brain gives us a new prediction (which in turn sparks other, more "global" thinking as we search for an explanation, and so on).
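To make the shape of that claim concrete, here's a minimal sketch in Python--my own toy illustration of the escalation idea, not Hawkins's model or code--in which each layer checks incoming data against a stored, rote prediction and passes the data upward only when the prediction fails:

```python
# Toy sketch (my illustration, not Hawkins's model) of hierarchical prediction
# with escalation: each layer tests the input against its rote prediction; a
# match ends processing early, a mismatch pushes the input up the hierarchy.

class Layer:
    def __init__(self, name, expected):
        self.name = name
        self.expected = expected  # the layer's stored, rote prediction

    def handles(self, observation):
        # True if the stored prediction accounts for the observation.
        return observation == self.expected


def process(observation, hierarchy):
    """Pass an observation up the hierarchy until some layer accounts for it."""
    for layer in hierarchy:
        if layer.handles(observation):
            return f"{layer.name}: prediction holds, processing stops here"
    return "top of hierarchy: no stored prediction fits, synthesize a new one"


# The doorknob case: lower layers expect the knob where it has always been.
hierarchy = [
    Layer("sensorimotor layer", expected="knob in the usual spot"),
    Layer("mid-level layer", expected="door opens as it always has"),
]

print(process("knob in the usual spot", hierarchy))      # handled low, feels automatic
print(process("knob moved two inches left", hierarchy))  # escalates, demands attention
```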
Ok. I'll buy it as far as it goes, but the issue with Hawkins, as with the consciousness debates, is that it doesn't really "go". How exactly are we intelligent, again? For all the machinery of a hierarchical model of intelligence, "intelligence" itself remains largely untouched. Offering that it's "hierarchical" and "located in the neocortex" is hardly something we can reverse-engineer, any more than we can explain the taste of a fine red wine by pointing to quantum events in microtubules. "So what?" one might say, without fear of missing the point. To put it another way, we don't want a brain-inspired, systems-level description of the black box of human intelligence--how we see what is relevant in complex, dynamic environments--we want a description of how the box itself works. What's in the box? That's the theory we need, but it's not what we get from Hawkins, however elaborate his hierarchical vision of the cortex might be.
To prove my point, we might put our money where our mouth is, as they say, and take his entire theory and code it up in a software system that reproduces exactly the connections he specifies. What will happen? Well, the smart money says "not much", because such systems have in fact been around for decades in computer science (hierarchical models of inference, and so on), and we already know that the details at the level of systems don't provide the underlying juice--whatever it is--to actually reproduce thinking. If they did, the millions of lines of code generated from every imaginable conceptualization of intelligent machine thinking would have hit on it by now. Something more is going on than systems thinking. (As it turns out, Hawkins himself has largely proved my point. He launched a software company that predicts abnormal network events for security purposes using computational models inspired by his research on the neocortex. Have you heard of the company? Me neither, until I read his web page. Not to be cruel, but if he had decoded human intelligence in a programmable way, the NASDAQ would have told us by now.)
I'm not picking on Hawkins. Let's take another popular account, this time from a practicing neuroscientist and all-around smart guy, David Eagleman. Eagleman argues in his 2011 Incognito that the brain is a "team of rivals", a theory that mirrors AI researcher Marvin Minsky's agent-based approach to reproducing human thought in The Society of Mind (1986) and, later, The Emotion Machine (2006). The brain reasons and thinks, claims Eagleman, by proposing different interpretations of the sense data coming from our environment and, through refinement and checking against available evidence and pre-existing beliefs, letting the "best" interpretation win out. Different systems in the brain provide different pictures of reality, and the competition among these systems yields stable theories or predictions at the level of our conscious beliefs and thoughts.
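Again, just to fix the structure of the idea--this is my own toy illustration, not Eagleman's or Minsky's actual models or code--here is a sketch in Python of rival interpretations of the same sense data being scored against evidence and prior beliefs, with the best-supported one winning out:

```python
# Toy sketch (my illustration, not Eagleman's or Minsky's code) of a "team of
# rivals": candidate interpretations of the same sense data are scored against
# the evidence and prior beliefs, and the best-supported interpretation wins.

def support(interpretation, evidence, priors):
    # Crude scoring: count the evidence items the interpretation explains
    # plus the prior beliefs it is consistent with.
    return (sum(e in interpretation["explains"] for e in evidence)
            + sum(p in interpretation["consistent_with"] for p in priors))


interpretations = [
    {"label": "a snake in the grass",
     "explains": {"long thin shape", "sudden movement"},
     "consistent_with": {"snakes live around here"}},
    {"label": "a garden hose",
     "explains": {"long thin shape"},
     "consistent_with": {"a hose was left out yesterday"}},
]

evidence = {"long thin shape", "sudden movement"}
priors = {"snakes live around here"}

winner = max(interpretations, key=lambda i: support(i, evidence, priors))
print(winner["label"])  # the "winning" interpretation is what reaches conscious belief
```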
This is a quick pass through Eagleman's ideas on intelligence, but even if I were to dedicate several more paragraphs to explanation, I hope the reader can see the same problem up ahead. One is reminded of Daniel Dennett's famous quip in his "Cognitive Wheels" article on artificial intelligence, where he likens such explanations to a philosopher's account of a magic trick:
"It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half-trick, they explain that it is really quite obvious: the magician doesn't really saw her in half; he simply makes it appear that he does. 'But how does he do that?' we ask. 'Not our department', say the philosophers--and some of them add, sonorously: 'Explanation has to stop somewhere.'
But the "team of rivals" explanation has stopped, once again, before we've gotten anywhere meaningful. Of course the brain may be "like this" or may be "like that" (insert a system or model); we're searching for what makes the systems description work as a theory of intelligence in the first place. "But how?" we keep asking (echoing Dennett). Silence.