Wednesday, May 8, 2013

Coming to a Town Near You: Singularitarians, Transhumanists, and Smart Robots

     If you're lucky enough to work at a software start-up in a bastion of innovation like Palo Alto, you'll have a front-row seat watching young twenty-somethings with oodles of technical talent writing tomorrow's killer apps, talking about the latest tech news (everyone is in the know), and generally mapping out a techno vision of the future.  It's exciting stuff.  Walk down University Ave and take it all in; it doesn't matter much which bistro or restaurant you wander into, you'll hear the same excited patter of future talk--the next "New New Thing," as writer Michael Lewis put it.  The techno-ethos of Palo Alto is of course understandable, as hundreds of millions in venture capital flow into start-ups each year, making millionaires of kids barely out of school and changing the nature of business and everyday life for the rest of us.  It's an exciting place.  Yet, for all the benefits and sheer exhilaration of innovation, if you stick around long enough, you'll catch some oddly serious discussions about seemingly silly topics.  While there are plenty of skeptics and agnostics, lots of technical types are drawn to "Sci Fi" versions of the future.  And some of them, for whatever reason, seem to think they can predict it.

What's next, "big picture"?  Ask Google's founders, to take a notable example.  In a 2004 Newsweek interview, Sergey Brin ruminated:

"I think we're pretty far along compared to 10 years ago," he says. "At the same time, where can you go? Certainly if you had all the world's information directly attached to your brain, or an artificial brain that was smarter than your brain, you'd be better off. Between that and today, there's plenty of space to cover."

And it's not just Brin.  Google technology director Craig Silverstein chimed in (in the same article):  "The ultimate goal is to have a computer that has the kind of semantic knowledge that a reference librarian has."

Really?  From the Google intelligentsia, no less.  But this is part of the culture in Silicon Valley--and not just there.  All over the world, it's the engineers, computer scientists, and entrepreneurs who seem most obsessed with the idea of reverse engineering our brains to create artificial versions.  If you're an engineer immersed in the project of making better, "smarter" software all day, it's an understandable vision, even a noble one, by "geek" standards.  But cerebral types have been trumpeting the imminent arrival of Artificial Intelligence for decades, almost since Alan Turing gave us the original theoretical spec for a universal computing machine, in 1936.

Well, as a member of the "geek squad" myself, I've been following these debates for years, since back in graduate school at Texas and Arizona, where arguments about the nature of the human mind and the differences between humans and machines were commonplace.  Not much has changed--fundamentally--since those years (as far as I can tell), and the question of whether a machine can reproduce a mind is still largely unanswered.  But the world of technology has changed, quite radically, with the development and widespread adoption of the Web.  Perhaps our software isn't "human smart," but impressive technology is everywhere these days, and it grows further into every corner of our lives almost daily.  The notion, then, that our minds might end up in silicon-based systems is perhaps not as far-fetched as it first sounds.

In fact, the explosion of Web technology is probably most to credit (or blame) for the latest version of a Sci Fi future.  If you dare browse through all the "isms" that have sprung up out of this cornucopia of digitization, you'll likely find yourself wishing Lonely Planet published a tourist's guide for would-be futurists.  Failing that, let's take a look at a CliffsNotes version, next.

The Isms

As far as I can tell, there are a few main strands to the Sci Fi future involving superintelligent, artificial beings.  First, we have Singularitarianism (no, this isn't misspelled).  Entrepreneurs like Ray Kurzweil have popularized the neologism in books like The Age of Spiritual Machines (1999), The Singularity is Near (2005), and most recently How to Create a Mind: The Secret of Human Thought Revealed (2012).  The "singularity," as the name suggests, is the future point at which biological (human) and non-biological (machine) intelligence merge, creating a superintelligence no longer constrained by the limits of our physical bodies.  At "the singularity," we can download our brains onto better hardware and create a future world where we never have to get old and die, or get injured (we can have titanium bodies).  Plus, we'll be super smart, just as Brin suggests.  When we need some information about something, we'll just, well, "think," and the information will come to our computer-enhanced brains.

If this sounds incredible, you're not alone.  But Singularitarians insist that the intelligence of computers is increasing exponentially, and that as highfalutin as this vision might seem, the laws of exponential growth make it not only plausible but imminent.  In his earlier works, Kurzweil famously predicted that the "s-spot"--the singularity, where machines outstrip the intelligence of humans--would occur by 2029; by 2005 he had revised this to 2045.  Right up ahead.  (His predictions are predictably precise; understandably, they also tend to get revised to more distant futures as reality marches on.)  And Carnegie Mellon robotics expert Hans Moravec agrees, citing evidence from Moore's Law--the generally accepted observation that computing capacity on integrated circuits doubles roughly every eighteen months--that a coming "mind fire" will replace human intelligence with a "superintelligence" vastly outstripping mere mortals.  Moravec's prediction?  Eerily on par with Kurzweil's: in his 1998 Robot:  Mere Machine to Transcendent Mind, he sees machines achieving human levels of intelligence by 2040, and surpassing our biologically flawed hardware and software by 2050.
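Just to make the exponential arithmetic concrete, here is a back-of-the-envelope sketch (my own illustration in Python, assuming only the eighteen-month doubling figure quoted above; the 2045 date is the futurists', not mine):

```python
# Rough arithmetic behind the futurists' exponential optimism.
# Assumes capacity doubles every 18 months (the figure cited above);
# 2045 is Kurzweil's revised singularity estimate.

def doublings(years, period_years=1.5):
    """How many doublings fit into a span of years."""
    return years / period_years

def growth_factor(years, period_years=1.5):
    """Total multiplicative growth in capacity over that span."""
    return 2 ** doublings(years, period_years)

span = 2045 - 2013  # years from this post to the predicted singularity
print(f"{doublings(span):.1f} doublings, roughly {growth_factor(span):,.0f}x the capacity")
```

Whether raw capacity has anything to do with intelligence is, of course, the question at issue.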

Well, if all of this singularity talk creeps you out, don't worry.  There are tamer visions of the future from the geek squad, like transhumanism.  Transhumanists (many of whom share the millennial raptures of Singularitarians) seek an extension of our current cognitive powers through the fusion of machine and human intelligence.  Smart drugs, artificial brain implants for enhanced memory or cognition, and even "nanobots"--microscopic robots let loose in our brains to map and enhance our neural activity--promise to evolve our species from the boring, latte-drinking Humans 1.0 to machine-fused Humans 2.0, where, as Brin suggests, we can have "all the world's information directly attached" to our brains.  (Sweet!)

Enter True AI

Singularitarians.  Transhumanists.  They're all bearish on mere humanity, it seems.  But there's another common thread besides the disdain for mere flesh and blood, one that makes the futurists' "isms" a distinction without a substantive difference: whether your transhuman future includes a singularity or merely perpetual, incremental enhancement (which, arguably, we've been doing with our technology since prehistory), you're counting on Artificial Intelligence--smart robots.

After all, who would fuse themselves with a shovel, or a toaster?  It's the promise of artificial intelligence that infuses techno-futurists' prognostications with hope for tomorrow.  And while the history of AI suggests deeper and thornier issues bedevil the engineering of truly intelligent machines, the exponential explosion of computing power and speed, along with the miniaturization of nearly everything, makes the world of smart robots seem plausible (again), at least to the "isms" crowd.  As Wired magazine co-founder and techno-futurist Kevin Kelly remarks in his 2010 What Technology Wants, we are witnessing the "intelligenization" of nearly everything.  Everywhere we look, "smart technologies" are enhancing our driving experiences, our ability to navigate with GPS, to find what we want, to shop, bank, socialize, you name it.  Computers are embedded in our clothing now, or in our eyewear (you can wear a prototype of the computer-embedded Google Glass these days, if you're one of the select few chosen).  Intelligenization, everywhere.

Or, not.  Computers are getting faster and more useful, no doubt, but are they really getting smarter, like humans?  That's a question for neuroscience, to which we now turn.

The Verdict from Neuroscience?  Don't Ask
  
One peculiarity of the current theorizing among the technology "nerds," focused as they are on the possibility of unlocking the neural "software" in our brains to use as blueprints for machine smarts, is the rather lackluster or even hostile reception their ideas receive from the people ostensibly most in the know about "intelligence" and its prospects and challenges--the brain scientists.  Scientists like Gerald Edelman, Nobel laureate and director of the Neurosciences Institute in San Diego, for example.  Edelman is notably skeptical, almost sarcastic, when asked about the prospects of reverse engineering the brain in software systems.  "This is a wonderful project--that we're going to have a spiritual bar mitzvah in some galaxy," Edelman says of the singularity. "But it's a very unlikely idea."  Bummer.  (In California parlance:  "dude, you're dragging us down.")

And Edelman is not alone in voicing skepticism of what sci fi writer Ken MacLeod calls "rapture for nerds."  In fact, almost in proportion to the enthusiasm among the "machine types"--the engineers and entrepreneurs like Google's Brin, and countless others in the slick office spaces adorning high-tech places like Silicon Valley--the "brain types" pour cold water.  Wolf Singer of the Max Planck Institute for Brain Research in Frankfurt, Germany, is best known for his "oscillations" proposal, which theorizes that synchronized patterns in the firing of neurons are linked, perhaps, to cognition.  Singer's research inspired no less than Francis Crick, co-discoverer of the structure of DNA, and Caltech neuroscience star Christof Koch to propose that "40 Hz oscillations" play a central role in forming our conscious experiences.

Yet Singer is notably unmoved by the futurists' prognostications about artificial minds.  As former Scientific American writer John Horgan notes in his IEEE Spectrum article "The Consciousness Conundrum":  "Given our ignorance about the brain, Singer calls the idea of an imminent singularity [achieving true AI] 'science fiction'."  Koch agrees.  Comparing the genetic code Crick helped crack to the "neural code" we would need to decipher in order to engineer a mind, he muses: "It is very unlikely that the neural code will be anything as simple and as universal as the genetic code."  What gives?

It's hard to say.  As always, the business of predicting the future is uncertain.  One thing seems probable, however.  The core mysteries of life, like conscious experience and intelligence, will continue to beguile and humble us, leaving us with a greater appreciation for their complexity and beauty.  And, predictably, what have been called "Level 1" or "shop floor" technologies--the ones we employ to achieve specific goals, like traveling from A to B quickly (an airplane), digging a ditch (a shovel), or searching millions of electronic web pages (a search engine)--will continue to get more powerful and complex.  What is less predictable, it seems, is whether all these enhancement projects will really unlock anything special, beyond the digitization of our everyday experiences in zillions of gadgets and tools.  Indeed, whether all these gadgets and tools really are getting "smarter," or just faster, smaller, and more ubiquitous in our lives, is itself an open question, properly understood.  In the complicated connections between technologies and the broader social, political, and cultural contexts within which they exist, almost any future seems possible.  As Allenby and Sarewitz note in their 2011 critique of transhumanism, The Techno-Human Condition, the real world is always a struggle to define values, and contra technology-centered types like Kurzweil or Moravec, it gets more and more complicated, and harder--not easier--to predict.  Technology, in other words, makes things murkier for futurists.  And real science--real thinking--can, ideally, provide some balance.  We'll see.

Back in Silicon Valley, things don't seem so philosophically confusing.  The future, as always, seems perpetually wide open to more and better, which is assumed, lockstep-like, to mean better outcomes for us, too.  But the sobering news from the frontiers of neuroscience is that the "big questions" remain unanswered today, and answering them seems a long way off to boot.  I'm not a betting person, but however the world looks in 2045 (or was it 2029?), it's safe to say we don't know yet.  In the meantime, the all-too-human tendency to see nails everywhere with each new version of a hammer is likely to continue, unabated.  Well, so what?  Perhaps the Google founders and their legions of programmers have earned the right to prognosticate.  We humans can smile and shrug, and wait and see.  We're all just human, after all.




Saturday, May 4, 2013

Zen and the Art of Staring at Brains

     Neuroscience is exciting, and frustrating, business for practitioners of Artificial Intelligence (AI) and other fields, like cognitive science, dedicated to reverse engineering the human mind by studying the brain. Leibniz anticipated much of the modern debate centuries ago, when he remarked that if we could shrink ourselves to microscopic size and "walk around" inside the brain, we would never discover a hint of our conscious experiences. We can't see consciousness, even if we look at the brain. The problem of consciousness, as it is known to philosophers, is a thorny problem that unfortunately has not yielded its secrets even as techniques for studying the brain in action have proliferated in the sciences. Magnetic Resonance Imaging (MRI), functional MRI (fMRI), and other technologies give us detailed maps of brain activity that have proven enormously helpful in diagnosing and treating a range of brain-related maladies, from addiction to head trauma to psychological disorders. Yet, for all the sophistication of modern science, Leibniz's remarks remain prescient. If the brain is the seat of consciousness, why can't we explain consciousness in terms of the brain?
    When I was a Ph.D. student at Arizona in the late 1990s, many of the philosophic and scientific rock stars would gather at the interdisciplinary Center for Consciousness Studies and discuss the latest theories of consciousness. While DNA co-discoverer Francis Crick declared in his The Astonishing Hypothesis that "a person's mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them", scientists like Christof Koch of Caltech pursued specific research on "the neural correlates of consciousness" in the visual system, and Stuart Hameroff, along with physicist Roger Penrose, searched for the roots of consciousness at still more fundamental levels, in quantum effects in the microtubules of our brains. Philosophers dutifully provided the conceptual background, from Paul and Patricia Churchland's philosophic defense of Crick's hypothesis in eliminativism--the view that there is no problem of consciousness because "consciousness" isn't a real scientific object of study (it's like an illusion we create for ourselves, with no physical reality)--to David Chalmers' defense of property dualism, to the "mysterians" like Colin McGinn, who suggested that consciousness is simply beyond our understanding. We are "cognitively closed" to certain explanations, says McGinn, like a dog trying to understand Newtonian mechanics.
     Yet with all the great minds gathered together, solutions to the problem of consciousness were in short supply.  In fact the issues, to me anyway, seemed to become larger, thornier, and more puzzling than ever. I left Arizona and returned to the University of Texas at Austin in 1999 to finish my master's degree (I was a visiting grad student at Arizona), and I didn't think much more about consciousness--with all those smart people drawing blanks, contradicting each other, and arguing over basics like how even to frame the debate, who needed me? (Ten years later I checked in with the consciousness debate to discover, as I suspected, that the issues on the table were largely unchanged. As Fodor once put it, the beauty of philosophical problems is that you can leave them sit for decades, and return without missing a step.) My interest anyway was in human intelligence, not consciousness.
     Intelligence. Unlocking the mystery of how we think was supposed to be a no-brainer (no pun intended). Once Turing gave us a model of computation in the 1930s, researchers in AI began proclaiming that non-biological intelligence was imminent. Herbert Simon, the Nobel laureate and AI pioneer who went on to win both the prestigious A.M. Turing Award and the National Medal of Science, declared in 1957 that "there are now in the world machines that think, that learn and that create", and prognosticated in 1965 that "[by 1985], machines will be capable of doing any work Man can do." In 1967, AI luminary Marvin Minsky of MIT predicted that 'within a generation, the problem of creating "artificial intelligence" will be substantially solved'.
     Of course, none of this happened. Even stalwart visionaries have their limits, and so today you'll hear nary a whimper about the once-crackling project of programming a computer to think like a person. Something was missed. Today, the neuroscience model is all the rage. Apparently what was wrong with "GOFAI," or "Good Old Fashioned AI," was precisely what was once touted as its strength--we can't ignore how our brains work when trying to understand intelligence, because brains are the only things we know of that can think. Programming intelligent robots from armchair principles and theories--well, that was bound to fail.
     Enter Modern AI, which has essentially two active programs. On the one hand, as I've mentioned, we want to reverse-engineer intelligence in artificial artifacts by studying the brain. On the other, we can view intelligence as an emergent phenomenon of complex systems. Since the actual "stuff" of the complex system isn't the point, but rather its complexity, in principle we might see intelligence emerge from something like the World Wide Web. I'm not sure how seriously to take the latter camp, as it seems a little too glib to take our latest technological innovations and (again and again, in the history of technology) declare them the next candidate for AI. Let's set this aside for now. But the former camp has a certain plausibility to it, and in the wake of the failure of traditional AI, what else do we have? It's the brain, stupid. 
    Well, it is, but it's not. Mirroring the consciousness conundrums, the quest for AI, now anchored in brain research, appears destined for the same hodgepodge of Hail Mary theorizing and prognosticating as consciousness studies. There's a reason for this, I think, which is again prefigured in Leibniz's pesky comment. To unpack it all, let's look at some theories popular today.
     Take Jeff Hawkins. Famous for developing the Palm Pilot and an all-around smart guy in Silicon Valley, Hawkins dipped his toe into the AI waters in 2004 with the publication of On Intelligence, a bold and original attempt to summarize the volumes of neuroscience data about thinking in the neocortex with a hierarchical model of intelligence. The neocortex, Hawkins argues, takes input from our senses and "decodes" it in hierarchical layers, with each higher layer making predictions from the data provided by the layer below, until we reach the top of the hierarchy and some overall predictive theory is synthesized from the output of the lower layers. His theory makes sense of some empirical data, such as differences in our responses to different types of input. For "easier" predictive problems, the propagation up the cortical hierarchy terminates sooner (we've got the answer); for tougher problems, the cortex keeps processing, passing the neural input up to higher, more powerful and globally sensitive layers. The solution is then passed back down to lower layers until we have a coherent prediction based on the original input.
     Hawkins has an impressive grasp of neuroscience, and he's an expert at using his own innovative brain to synthesize lots of data into a coherent picture of human thinking. Few would disagree that the neocortex is central to any understanding of human cognition, and intuitively (at least to me) his hierarchical model explains why we sometimes pause to "process" more of the input we're receiving in our environments before we have a picture of things--a prediction, as he says. He cites the commonsense case of returning to your home and finding the doorknob moved a few inches to the left (or right). The prediction has been coded into lower levels because prior experience has made it rote: we open the door again and again, and the knob is always in the same place. So when it's moved slightly, Hawkins claims, the rote prediction fails, and the cortex sends the visual and tactile data further up the hierarchy, until the brain gives us a new prediction (which in turn will spark other, more "global" thinking as we search for an explanation, and so on).
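Purely to fix ideas, here is a toy sketch of that hierarchical-prediction story (my own illustration in Python, not Hawkins' model or any Numenta algorithm): each layer remembers the patterns it has already learned to predict, and input that a lower layer fails to predict climbs to a more general layer.

```python
# Toy illustration of the hierarchical-prediction idea described above.
# My own sketch, not Hawkins' model or any Numenta algorithm: each layer
# remembers patterns it already "explains"; input that a lower layer fails
# to predict is passed up to a more general layer.

class Layer:
    def __init__(self, name):
        self.name = name
        self.known_patterns = set()

    def predicts(self, pattern):
        return pattern in self.known_patterns

    def learn(self, pattern):
        self.known_patterns.add(pattern)


def process(hierarchy, pattern):
    """Propagate a pattern up the hierarchy until some layer predicts it."""
    for layer in hierarchy:
        if layer.predicts(pattern):
            return f"handled at {layer.name} (rote prediction)"
        layer.learn(pattern)  # surprise: remember it, then escalate
    return "novel input reached the top: global re-interpretation needed"


hierarchy = [Layer("V1-like"), Layer("mid-level"), Layer("top-level")]
print(process(hierarchy, "doorknob in the usual place"))   # novel: climbs to the top
print(process(hierarchy, "doorknob in the usual place"))   # now rote: handled low
print(process(hierarchy, "doorknob moved two inches"))     # surprise climbs again
```

Nothing in this toy speaks to the "how" question raised next; it just makes the systems-level shape of the claim explicit.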
     Ok. I'll buy it as far as it goes, but the issue with Hawkins, as with the consciousness debates, is that it doesn't really "go". How exactly are we intelligent, again? For all the machinery of a hierarchical model of intelligence, "intelligence" itself remains largely untouched. Offering that it's "hierarchical" and "located in the neocortex" is hardly something we can reverse engineer, any more than we can explain the taste of a fine red wine by pointing to quantum events in microtubules. "So what?" one might say, without fear of missing the point. To put it another way, we don't want a brain-inspired systems-level description of the black box of human intelligence--how we see what is relevant in complex, dynamic environments--we want a description of how the box itself works. What's in the box?  That's the theory we need, but it's not what we get from Hawkins, however elaborate the hierarchical vision of the cortex might be.
     To prove my point, we might put our money where our mouths are, as they say, and take his entire theory and code it up in a software system that reproduces exactly the connections he specifies. What will happen? Well, the smart money says "not much," because such systems have in fact been around for decades in computer science (hierarchical models of inference, and so on), and we already know that details at the level of systems don't provide the underlying juice--whatever it is--to actually reproduce thinking. If they did, the millions of lines of code generated from every imaginable conceptualization of machine thinking would have hit on it. Something more is going on than systems thinking. (As it turns out, Hawkins himself has largely proved my point. He launched a software company that predicts abnormal network events for security purposes using computational models inspired by his research on the neocortex. Have you heard of the company? Me neither, until I read his web page. Not to be cruel, but if he had decoded human intelligence in a programmable way, the NASDAQ would have told us by now.)
     I'm not picking on Hawkins. Let's take another popular account, this time from a practicing neuroscientist and another all-around smart guy, David Eagleman. Eagleman argues in his 2011 Incognito that the brain is a "team of rivals," a theory that mirrors AI researcher Marvin Minsky's agent-based approach to reproducing human thought in his Society of Mind (1986) and later The Emotion Machine (2006). The brain reasons and thinks, claims Eagleman, by proposing different interpretations of the sense data coming from our environment and, through refinement and checking against available evidence and pre-existing beliefs, allowing the "best" interpretation to win out. Different systems in the brain provide different pictures of reality, and the competition among these systems yields stable theories or predictions at the level of our conscious beliefs and thoughts.
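Again just to fix ideas, here is a minimal sketch of the "team of rivals" picture as described above (my own illustration in Python, not Eagleman's or Minsky's actual models; the rivals, evidence, and scores are invented for the example): each rival scores its interpretation against the evidence, and the highest-scoring interpretation wins.

```python
# Minimal sketch of a "team of rivals" architecture as described above.
# My own illustration, not Eagleman's or Minsky's actual models: rival
# interpreters each score an interpretation against the evidence, and the
# highest-scoring interpretation is the one that "wins" awareness.

def team_of_rivals(evidence, rivals):
    """Each rival maps evidence to (interpretation, score); the best score wins."""
    proposals = [rival(evidence) for rival in rivals]
    return max(proposals, key=lambda proposal: proposal[1])


# Two toy rivals interpreting an ambiguous shape in the grass.
def threat_detector(evidence):
    score = 0.9 if "long and thin" in evidence else 0.1
    return ("it's a snake", score)

def mundane_explainer(evidence):
    score = 0.7 if "garden hose nearby" in evidence else 0.2
    return ("it's a hose", score)

evidence = {"long and thin", "garden hose nearby"}
winner, confidence = team_of_rivals(evidence, [threat_detector, mundane_explainer])
print(winner, confidence)  # the snake hypothesis wins this round
```

And, echoing Dennett below, the sketch says nothing about where the rivals' scores come from--which is exactly the "how" that goes missing.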
    This is a quick pass through Eagleman's ideas on intelligence, but even if I were to dedicate several more paragraphs of explanation, I hope the reader can see the same problem up ahead. One is reminded of Daniel Dennett's famous quip in his Cognitive Wheels article about artificial intelligence, where he likens AI to explanations of a magic trick:
                 "It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half-trick, they explain that it is really quite obvious:  the magician doesn't really saw her in half; he simply makes it appear that he does.  'But how does he do that?' we ask.  'Not our department', say the philosophers--and some of them add, sonorously:  'Explanation has to stop somewhere.'

But the "team of rivals" explanation has stopped, once again, before we've gotten anywhere meaningful.  Of course the brain may be "like this" or may be "like that" (insert a system or model); we're searching for what makes the systems description work as a theory of intelligence in the first place.  "But how?" we keep asking (echoing Dennett).  Silence.

  Well, not to pick on Eagleman, either. I confess that I thoroughly enjoyed his book, at least right up to the point where he tackles human intelligence. It's not his fault. If someone is looking at the brain to unlock the mystery of the mind, the specter of Leibniz is sure to haunt them, no matter how smart or well informed. Human intelligence is not the sort of thing that can be illuminated by poking around the brain to extract a computable "system"--the systems-level description gives us an impressive set of functions and structures that can be written down and discussed, but it's essentially an inert layer sitting on top of a black box. Again, it's what's inside the box that we want, however many notes we scribble on its exterior. There's a more precise way of stating my point here, to which we now turn.
     Enter Michael Polanyi. The chemist and philosopher was known for his political theorizing, his scientific research, and his large, difficult treatises arguing against mechanical conceptions of science. In my view, he should also be known for supplying the key insight into why attempts to unlock intelligence (or consciousness) through systems engineering are bound to fall short. Polanyi recognized that much of what makes us intelligent is what he called tacit knowledge: knowledge we have but can't express in language (including computer language). Human skills, he said, can't be captured by maxims or rules (though such rules may guide and assist someone who already possesses a skill).
     It's a simple observation. Take riding a bike. If we try to teach someone to ride a bike by handing them a set of rules to follow, we'll have to buy extra band-aids, because no one learns to ride a bike by following a set of rules. The same goes for other skills, like swimming, or (more interestingly) discovering new theories in science. There's no rule book.
     What's interesting about this apparently straightforward and benign observation is its deflating effect on so much of the current systems enthusiasm for explaining ourselves. Even if we are following some path through an elaborate (say, hierarchical) system, we can't articulate that system as a symbolic representation without losing some of the magic sauce that lies at the core of intelligence. It's not that intelligence is mystical, or necessarily non-physical (one need not be a Cartesian dualist); it's that it's not amenable to articulation, whatever it is. Yet what else occupies the modern neuroscience or AI proponent than an incessant, near-obsessive desire to capture the system--to write down the set of rules, in other words? Polanyi's seemingly innocuous point about the non-articulable aspects of human thinking casts suspicion on such a project from the start. Intelligence, in other words, isn't something we can describe at the level of a (computable) system--a symbolic representation--any more than consciousness is. It may suggest particular system descriptions (if we're lucky), but it is not itself captured by them.
     But if Polanyi is right, then the entire project of articulating a systems-level account of intelligence, such that some other rule-following artifact like a computer can reproduce it, is doomed. Again, if we use tacit (not symbolically representable) knowledge to act intelligently in our day-to-day lives--from riding a bike to discovering a cure for malaria--then attempts to symbolically represent intelligence will always leave something out. It's as if we have a beautiful car, with shiny wheels, brilliant paint, tinted windows... and no engine. The "engine" stays in the brain, frustratingly, while we articulate the complex systems surrounding it. To paraphrase philosopher Jerry Fodor in his 2001 The Mind Doesn't Work That Way: it's no wonder our robots still don't work. If Polanyi is right, there's a good reason why.
     So, like the consciousness tales told by very intelligent people, but still not signifying what we wish, the quest for Artificial Intelligence rages on and on, with no smart robots in sight. We're up against a fundamental limitation.  Well, so what?  We needn't despair of ever explaining ourselves somehow, or take refuge in glib techno-futurist predictions divorced from reality (futurist Ray Kurzweil has famously predicted that computers will fully reproduce minds by 2029, to take one particularly bombastic example of our capacity to remain impervious to the deeper issues in AI).  In fact, seemingly deflating conclusions--if true--can often lead to better ideas tomorrow. It's not troubling or threatening to figure out a real limitation--it's progress.
     Consider the history of science.  Take the mathematical logician Kurt Gödel, who proved his famous incompleteness theorems in 1931, putting to rest a long-standing dream of mathematicians to reduce mathematics to logic. Gödel showed that any formal system rich enough to be interesting (roughly, anything that can express basic arithmetic, like the Peano axioms) has fundamental limitations--it cannot be both consistent and complete. This means that mathematical thinking lies outside the scope of logical proof (and hence computation), no matter how complex the logical formalism one uses. Yet, far from shutting down further research in mathematics, the result arguably paved a path to modern computation: Turing published his halting-problem results building on Gödel's work, and in the same 1936 paper provided the theoretical model (the Turing machine) for universal computing machines. Not bad for a limiting result.
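For readers who like their limits stated compactly, here is a standard modern formulation of the first theorem (a textbook paraphrase in the Gödel-Rosser form, not Gödel's own wording):

```latex
% First incompleteness theorem, standard modern (Gödel-Rosser) statement:
% for any consistent, effectively axiomatizable theory T that interprets
% enough arithmetic (e.g., Peano Arithmetic), there is a sentence G_T in
% the language of T that T neither proves nor refutes:
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
\]
```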
     One might make similar observations about, say, Heisenberg's uncertainty principle, which tells us that the position and the momentum of a particle cannot both be known to arbitrary precision at the same time. Again, a limitation. And again, an active area of research (quantum mechanics). So the point, to me, is not that we're being psychologically depressing or retrograde when we acknowledge the core problems we face in unlocking the mysteries of thinking (as Chalmers once said about consciousness, you first have to feel the problem "in your bones"). A patina of impressive, neuroscience-informed systems theories may generate lots of journal publications, but the proof of a system is in its success, and I don't think we should be too sanguine about success given a serious appraisal of the challenges we face. Yet the history of science suggests that our limitations, once acknowledged, may in fact prove vastly more productive in the long run than continuing to make the same errors. We may, in other words, simply be on the wrong path. That's not a limitation, it's knowledge.