
Saturday, May 4, 2013

Zen and the Art of Staring at Brains

     Neuroscience is exciting, and frustrating, business for practitioners of Artificial Intelligence (AI) and of other fields, like cognitive science, dedicated to reverse engineering the human mind by studying the brain. Leibniz anticipated much of the modern debate centuries ago when he remarked that if we could shrink ourselves to microscopic size and "walk around" inside the brain, we would never discover a hint of our conscious experiences. We can't see consciousness, even if we look at the brain. The problem of consciousness, as it is known to philosophers, is a thorny one that unfortunately has not yielded its secrets even as techniques for studying the brain in action have proliferated in the sciences. Magnetic Resonance Imaging (MRI), functional MRI (fMRI), and other technologies give us detailed maps of brain activity that have proven enormously helpful in diagnosing and treating a range of brain-related maladies, from addiction to head trauma to psychological disorders. Yet, for all the sophistication of modern science, Leibniz's remarks remain prescient. If the brain is the seat of consciousness, why can't we explain consciousness in terms of the brain?
    When I was a Ph.D. student at Arizona in the late 1990s, many of the philosophic and scientific rock stars would gather at the interdisciplinary Center for Consciousness Studies and discuss the latest theories of consciousness. While DNA co-discoverer Francis Crick declared in his The Astonishing Hypothesis that "a person's mental activities are entirely due to the behavior of nerve cells, glial cells, and the atoms, ions, and molecules that make them up and influence them", scientists like Christof Koch of Caltech pursued specific research on "the neural correlates of consciousness" in the visual system, and Stuart Hameroff, along with physicist Roger Penrose, searched for the roots of consciousness at still more fundamental levels, in putative quantum effects in the microtubules of our neurons. Philosophers dutifully provided the conceptual background, from Paul and Patricia Churchland's philosophic defense of Crick's hypothesis in eliminativism--the view that there is no problem of consciousness because "consciousness" isn't a real scientific object of study (it's like an illusion we create for ourselves, with no physical reality)--to David Chalmers' defense of property dualism, to the "mysterians" like Colin McGinn, who suggested that consciousness is simply beyond our understanding. We are "cognitively closed" to certain explanations, says McGinn, like a dog trying to understand Newtonian mechanics.
     Yet with all the great minds gathered together, solutions to the problem of consciousness were in short supply. In fact, the issues, to me anyway, seemed to become larger, thornier, and more puzzling than ever. I left Arizona and returned to the University of Texas at Austin in 1999 to finish my master's degree (I was a visiting grad student at Arizona), and I didn't think much more about consciousness--with all those smart people drawing blanks, contradicting each other, and arguing over basics like how even to frame the debate, who needed me? (Ten years later I checked in with the consciousness debate to discover, as I suspected, that the issues on the table were largely unchanged. As Fodor once put it, the beauty of philosophical problems is that you can let them sit for decades and return without missing a step.) My interest anyway was in human intelligence, not consciousness.
     Intelligence. Unlocking the mystery of how we think was supposed to be a no-brainer (no pun intended). Ever since Turing gave us a model of computation in the 1930s, researchers in AI have been proclaiming that non-biological intelligence is imminent. Herbert Simon, the Nobel laureate and AI pioneer who also won the prestigious A.M. Turing Award and the National Medal of Science, declared in 1957 that "there are now in the world machines that think, that learn and that create", and went on to prognosticate in 1965 that "[By 1985], machines will be capable of doing any work Man can do." In 1967, AI luminary Marvin Minsky of MIT predicted that 'within a generation, the problem of creating "artificial intelligence" will be substantially solved'.
     Of course, none of this happened. Even stalwart visionaries have their limits, and so today you'll hear nary a whimper about the once-crackling project of programming a computer to think like a person. Something was missed. Today, the neuroscience model is all the rage. Apparently what was wrong with "GOFAI", or "Good Old Fashioned AI", was precisely what was once touted as its strength: we can't ignore how our brains work in understanding intelligence, because brains are the only things we know can think. Programming intelligent robots from armchair principles and theories--well, that was bound to fail.
     Enter Modern AI, which has essentially two active programs. On the one hand, as I've mentioned, we want to reverse-engineer intelligence in artifacts by studying the brain. On the other, we can view intelligence as an emergent phenomenon of complex systems. Since the actual "stuff" of the complex system isn't the point, but rather its complexity, in principle we might see intelligence emerge from something like the World Wide Web. I'm not sure how seriously to take the latter camp, as it seems a little too glib to take our latest technological innovation and (again and again, in the history of technology) declare it the next candidate for AI. Let's set this aside for now. But the former camp has a certain plausibility to it, and in the wake of the failure of traditional AI, what else do we have? It's the brain, stupid.
    Well, it is, but it's not. Mirroring the consciousness conundrums, the quest for AI, now anchored in brain research, appears destined for the same hodgepodge, Hail Mary-style theorizing and prognosticating as consciousness studies. There's a reason for this, I think, which is again prefigured in Leibniz's pesky comment. To unpack it all, let's look at some theories popular today.
     Take Jeff Hawkins. Famous for developing the Palm Pilot, and an all-around smart guy in Silicon Valley, Hawkins dipped his toe into the AI waters in 2004 with the publication of his On Intelligence, a bold and original attempt to summarize the volumes of neuroscience data about thinking in the neocortex with a hierarchical model of intelligence. The neocortex, Hawkins argues, takes input from our senses and "decodes" it in hierarchical layers, with each higher layer making predictions from the data provided by a lower one, until we reach the top of the hierarchy and some overall predictive theory is synthesized from the output of the lower layers. His theory makes sense of some empirical data, such as differences in our responses to different types of input. For "easier" predictive problems, the propagation up the neocortical hierarchy terminates sooner (we've got the answer); for tougher problems, the cortex keeps processing, passing the neural input up to higher, more powerful and globally sensitive layers. The solution is then made available, or passed back to lower layers, until we have a coherent prediction based on the original input.
     Hawkins has an impressive grasp of neuroscience, and he's an expert at using his own innovative brain to synthesize lots of data into a coherent picture of human thinking. Few would disagree that the neocortex is central to any understanding of human cognition, and intuitively (at least to me) his hierarchical model explains why we sometimes pause to "process" more of the input we're receiving from our environments before we have a picture of things--a prediction, as he says. He cites the commonsense case of returning to your home and finding the doorknob moved a few inches to the left (or right). The prediction has been coded into lower levels because the benefit of prior experience has made it rote: we open the door again and again, and the knob is always in the same place. So when it's moved slightly, Hawkins claims, the rote prediction fails, and the cortex sends the visual and tactile data further up the hierarchy, until the brain gives us a new prediction (which in turn will spark other, more "global" thinking as we search for an explanation, and so on).
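     For the programmers out there, here is a deliberately minimal sketch of the escalation idea--my own toy illustration, not Hawkins' actual algorithm, with made-up layer names, patterns, and tolerances:

```python
# Toy sketch of hierarchical prediction (an illustration, not Hawkins'
# real model): each layer checks input against its learned expectation;
# if the prediction error is small, processing stops there ("rote"
# recognition); otherwise the signal escalates to a higher, more global layer.

class Layer:
    def __init__(self, name, expected, tolerance):
        self.name = name
        self.expected = expected    # the pattern this layer predicts
        self.tolerance = tolerance  # largest error it will accept

    def error(self, observed):
        return sum(abs(e - o) for e, o in zip(self.expected, observed))

def predict(hierarchy, observed):
    for layer in hierarchy:  # lowest layer first
        if layer.error(observed) <= layer.tolerance:
            return "handled at " + layer.name
    return "novel input: global re-interpretation needed"

# the doorknob case: a small perturbation escalates past the lowest layer
hierarchy = [Layer("low", [1.0, 2.0, 3.0], 0.1),
             Layer("mid", [1.0, 2.0, 3.0], 0.5),
             Layer("top", [1.0, 2.0, 3.0], 2.0)]
print(predict(hierarchy, [1.0, 2.0, 3.0]))  # handled at low (rote)
print(predict(hierarchy, [1.0, 2.3, 3.0]))  # escalates: handled at mid
```

     The point of the toy isn't fidelity to cortical anatomy; it's that the whole "theory", coded up, amounts to a dozen lines of bookkeeping--which is exactly the worry I raise next.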
     Ok. I'll buy it as far as it goes, but the issue with Hawkins, as with the consciousness debates, is that it doesn't really "go". How exactly are we intelligent, again? For all the machinery of a hierarchical model of intelligence, "intelligence" itself remains largely untouched. Offering that it's "hierarchical" and "located in the neocortex" is hardly something we can reverse engineer, any more than we can explain the taste of a fine red wine by pointing to quantum events in microtubules. "So what?" one might say, without fear of missing the point. To put it another way, we don't want a brain-inspired, systems-level description of the black box of human intelligence--of how we see what is relevant in complex, dynamic environments--we want a description of how the box itself works. What's in the box? That's the theory we need, but it's not what we get from Hawkins, however elaborate the hierarchical vision of the cortex might be.
     To prove my point, we might put our money where our mouth is, as they say, and take his entire theory and code it up in a software system that reproduces exactly the connections he specifies. What will happen? Well, the smart money says "not much", because such systems have in fact been around for decades in computer science (hierarchical models of inference, and so on), and we already know that details at the level of systems don't provide the underlying juice--whatever it is--to actually reproduce thinking. If they did, the millions of lines of code generated from every imaginable conceptualization of intelligent machine thinking would have hit on it. Something more is going on than systems thinking. (As it turns out, Hawkins himself has largely proved my point. He launched a software company that predicts abnormal network events for security purposes using computational models inspired by his research on the neocortex. Have you heard of the company? Me neither, until I read his web page. Not to be cruel, but if he had decoded human intelligence in a programmable way, the NASDAQ would have told us by now.)
     I'm not picking on Hawkins. Let's take another popular account, this time from a practicing neuroscientist and all-around smart guy, David Eagleman. Eagleman argues in his 2011 Incognito that the brain is a "team of rivals", a theory that mirrors AI researcher Marvin Minsky's agent-based approach to reproducing human thought in his Society of Mind (1986) and, later, The Emotion Machine (2006). The brain reasons and thinks, claims Eagleman, by proposing different interpretations of sense data from our environment and, through refinement and checking against available evidence and pre-existing beliefs, allowing the "best" interpretation to win out. Different systems in the brain provide different pictures of reality, and the competition among these systems yields stable theories or predictions at the level of our conscious beliefs and thoughts.
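     Again for the programmers, a toy sketch of the arbitration idea--my illustration, not Eagleman's or Minsky's actual model, with invented subsystems and confidence numbers:

```python
# Toy "team of rivals" (an illustration, not Eagleman's model): rival
# subsystems each score an interpretation of the same sense data against
# evidence and prior beliefs; the highest-confidence proposal wins out.

def emotional_system(data, priors):
    # jumpy subsystem: sees threats, amplified by anxious priors
    return ("threat", 0.4 + priors.get("anxious", 0.0))

def analytic_system(data, priors):
    # slower subsystem: favors mundane explanations in poor light
    return ("harmless shadow", 0.7 if "low light" in data else 0.3)

def arbitrate(data, priors, rivals):
    proposals = [rival(data, priors) for rival in rivals]
    return max(proposals, key=lambda p: p[1])  # best interpretation wins

belief = arbitrate({"low light"}, {"anxious": 0.1},
                   [emotional_system, analytic_system])
print(belief)  # ('harmless shadow', 0.7)
```

     And, as with Hawkins, notice how little of the interesting work the arbitration loop itself does; everything hard is hidden inside the rivals.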
    This is a quick pass through Eagleman's ideas on intelligence, but even if I were to dedicate several more paragraphs of explanation, I hope the reader can see the same problem up ahead. One is reminded of Daniel Dennett's famous quip in his "Cognitive Wheels" article on artificial intelligence, where he likens certain explanations in AI to explanations of a magic trick:
                 "It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half-trick, they explain that it is really quite obvious:  the magician doesn't really saw her in half; he simply makes it appear that he does.  'But how does he do that?' we ask.  'Not our department', say the philosophers--and some of them add, sonorously:  'Explanation has to stop somewhere.'

But the "team of rivals" explanation has stopped, once again, before we've gotten anywhere meaningful.  Of course the brain may be "like this" or may be "like that" (insert a system or model); we're searching for what makes the systems description work as a theory of intelligence in the first place.  "But how?" we keep asking (echoing Dennett).  Silence.

  Well, not to pick on Eagleman, either. I confess that I thoroughly enjoyed his book, at least right up to the point where he tackles human intelligence. It's not his fault. If someone is looking at the brain to unlock the mystery of the mind, the specter of Leibniz is sure to haunt them, no matter how smart or well informed. The issue with human intelligence is not the sort of thing that can be illuminated by poking around the brain to extract a computable "system"--the systems-level description gives us an impressive set of functions and structures that can be written down and discussed, but it's essentially an inert layer sitting on top of a black box. Again, it's what's inside the box that we want, however many notes we scribble on its exterior. There's a more precise way of stating my point here, to which we now turn. 
     Enter Michael Polanyi. The chemist and philosopher was known for political theorizing, scientific research, and large, difficult treatises arguing against mechanical conceptions of science. In my view, he should also be known for supplying the key insights that shed light on why attempts to unlock intelligence (or consciousness) with systems engineering are bound to fall short. Polanyi recognized that a lot of what makes us intelligent is what he called tacit knowledge: knowledge that we have but can't express in language (including computer language). Human skills, he said, can't be captured by maxims or rules (though such rules may guide and assist someone who already possesses a skill).
     It's a simple observation. Take riding a bike. If we try to describe how to ride a bike by providing some set of rules to follow, we'll have to buy extra band-aids, because no one can know how to ride a bike by following a set of rules. Same goes for other skills, like swimming, or (more interestingly) discovering new theories in science. There's no rule book. 
     What's interesting about this apparently straightforward and benign observation is its deflating effect on so much of the current systems enthusiasm for explaining ourselves. Even if we are following some path through an elaborate (say, hierarchical) system, we can't articulate that system in a symbolic representation without losing some of the magic sauce that lies at the core of intelligence. It's not that intelligence is mystical, or necessarily non-physical (one need not be a Cartesian dualist); it's that it's not amenable to articulation, whatever it is. Yet what else occupies the modern neuroscience or AI proponent than an incessant, near-obsessive desire to capture the system--to write down the set of rules, in other words? Polanyi's seemingly innocuous point about the non-articulable aspects of human thinking casts suspicion on such a project from the start. Intelligence, in other words, isn't something we can describe at the level of a (computable) system--a symbolic representation--any more than something like consciousness is. It may suggest particular system descriptions (if we're lucky), but it is not itself captured by them.
     But if Polanyi is right, then the entire project of articulating a systems-level account of intelligence, such that some other rule-following artifact like a computer can reproduce it, is doomed. Again, if we use tacit (not symbolically representable) knowledge to act intelligently in our day to day lives--from riding a bike to discovering a cure for malaria--then attempts to symbolically represent intelligence will always leave something out. It's as if we have a beautiful car, with shiny wheels, brilliant paint, tinted windows... and no engine. The "engine" stays in the brain, frustratingly, while we articulate the complex systems surrounding it. To paraphrase philosopher Jerry Fodor, in his 2001 The Mind Doesn't Work That Way: it's no wonder our robots still don't work. If Polanyi is right, there's a good reason why. 
     So, like the consciousness tales told by very intelligent people but still not signifying what we wish, the quest for Artificial Intelligence rages on and on, with no smart robots in sight. We're up against a fundamental limitation. Well, so what? We needn't despair of ever explaining ourselves somehow, or take refuge in glib techno-futurist predictions divorced from reality. (Futurist Ray Kurzweil has famously predicted that computers will completely reproduce minds by 2029, to take one particularly bombastic example of our capacity to remain impervious to the deeper issues in AI.) In fact, seemingly deflating conclusions--if true--can often lead to better ideas tomorrow. It's not troubling or threatening to figure out a real limitation--it's progress.
     Consider the history of science. Take the famous mathematical logician Kurt Gödel. Gödel proved his famous incompleteness theorems in 1931, putting to rest a long-standing dream of mathematicians: to reduce mathematics to logic. Gödel showed that any formal system rich enough to be interesting (basically, any system strong enough to express Peano arithmetic) has fundamental limitations--it cannot be both consistent and complete. This means that mathematical thinking lies outside the scope of logical proof (and hence computation), no matter how complex the logical formalism one uses. Yet, far from shutting down further research in mathematics, it arguably paved a path to modern computation: Turing's 1936 halting-problem result built directly on Gödel's, and the very same paper provided the theoretical model (the Turing machine) for universal computing machines. Not bad for a limiting result.
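     Turing's argument itself is short enough to sketch in code. What follows is the standard diagonal proof dressed up as Python--a sketch, necessarily unrunnable at the crucial step, since the decider it assumes cannot exist:

```python
# Sketch of the halting problem's undecidability. Suppose halts(p, x)
# could always correctly decide whether program p halts on input x.

def halts(program, arg):
    # hypothetical, perfect decider -- the thing Turing proved impossible
    raise NotImplementedError("no such total decider can exist")

def contrary(program):
    # do the opposite of whatever the decider predicts about
    # running `program` on itself
    if halts(program, program):
        while True:       # predicted to halt? then loop forever
            pass
    return "halted"       # predicted to loop? then halt at once

# Now consider contrary(contrary): it halts if and only if
# halts(contrary, contrary) says it doesn't. Contradiction either
# way, so no correct, always-terminating halts() can be written.
```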
     One might make similar observations about, say, Heisenberg's Uncertainty Principle, which tells us that the position and momentum of a particle cannot both be measured with arbitrary precision: the more exactly we pin down one, the less exactly we can know the other. Again, a limitation. And again, an active area of research (quantum mechanics). So the question, to me, is not whether we're being psychologically depressing or retrograde if we acknowledge the core problems facing us in unlocking the mysteries of thinking (as Chalmers once said about consciousness, you first have to feel the problem "in your bones"). A patina of impressive, neuroscience-informed systems theories may generate lots of journal publications, but the proof of a system is in its success, and I don't think we should be too sanguine about successes given a serious appraisal of the challenges we face. Yet the history of science suggests that our limitations, once acknowledged, may in fact prove vastly more productive in the long run than continuing to make the same errors. We may, in other words, simply be on the wrong path. That's not a limitation; it's knowledge.

Friday, September 30, 2011

Those Pesky Humans: Urban Planning and its Discontents

Article first published as Those Pesky Humans: Urban Planning and its Discontents on Blogcritics.

Greg Lindsay writes in the New York Times that Pegasus Holdings, a technology company based in Washington, DC, is building a "medium sized town" on 20 square miles of New Mexico desert. The town, dubbed the "Center for Innovation, Testing, and Evaluation" (mark it on the map!), will contain infrastructure adequate to support a population of 35,000, but will be home only to a handful of engineers and other geeks from Pegasus, who plan to use it as a laboratory to build future "smart cities", where power grids, traffic, security and surveillance systems are monitored and controlled by computer.

On the face of it, "smart cities" sound like a good idea (better than, say, "dumb cities"). The idea is, in outline, simple enough: (a) install sensors to gather information about how people move about and interact in cities, then (b) feed this data to computers to develop complex models of human behavior, generating policies that make things work better and more efficiently. To take an obvious example, who wouldn't want traffic lights optimized to increase vehicle throughput? Or pedestrian pathways that make two-way foot traffic flow more smoothly? Makes sense, right?
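Here's the flavor of step (b) in miniature--a toy, single-intersection model with numbers I made up, not anything from the Pegasus project:

```python
# Toy "smart city" optimization: choose the green-light split at one
# intersection to maximize vehicle throughput, given sensed arrival
# rates. All rates and capacities are invented for illustration.

def throughput(green_frac, rate_ns, rate_ew, capacity=1.0):
    # cars served per second on each road: limited by demand or by
    # the share of green time that road receives
    ns = min(rate_ns, capacity * green_frac)
    ew = min(rate_ew, capacity * (1.0 - green_frac))
    return ns + ew

def best_split(rate_ns, rate_ew, steps=100):
    candidates = [i / steps for i in range(1, steps)]
    return max(candidates, key=lambda g: throughput(g, rate_ns, rate_ew))

# sensors report 0.6 cars/sec north-south and 0.3 east-west
print(best_split(0.6, 0.3))  # 0.6 -- more green for the busier road
```

So far, so good: this sort of optimization genuinely works. The trouble starts when the model gets mistaken for the city.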

Yet, as Lindsay points out, these seemingly innocuous examples paper over a broader project that has repeatedly been exposed as folly: trying to simulate the behavior of people in cities using abstractions like computer models, rather than by gaining an understanding of what people living in cities care about and find valuable. These qualitative, subjective elements are typically what make a great city "great", smart by computer-modeling standards or not.

It would seem obvious and necessary to account for this human factor when constructing quantitative models for smart-city projects like Pegasus' (after all, we're talking about humans). Only, as is so often the case, the computer geeks view the "qualitative" features of a city as the very thing that needs to be analyzed quantitatively, and replaced. As Robert H. Brumley, managing director and co-founder of Pegasus, pronounced: "We think that sensor development has gotten to the point now where you can replicate human behavior."

And so Brumley and the Pegasus visionaries, in this latest round of "machine versus man", carry on a tradition of seeming ignorance of the manifest lessons of over-engineered urban planning going back decades, at least to the publication of the seminal "The Death and Life of Great American Cities" by flesh-and-blood New Yorker Jane Jacobs. Jacobs documented, again and again, how the best-laid urban plans led to frustration and a sense of alienation in the neighborhoods of New York City. For example, urban planners who attempted a gentrification project in a NYC slum decided that planting strips of grass outside tenements would have a salubrious effect. But alas, the pesky human tenants saw the grass strips as ridiculous, ill-placed, and insulting. The plan had the opposite effect, in other words--one that could have been "predicted" had the urban planners only taken the time to understand the neighborhood and get to know the tastes and circumstances of its inhabitants.

And there are more nefarious examples, like the 1968 RAND project to reduce fire response times in NYC, which resulted in an estimated 60,000 fires in impoverished sections of New York as "faulty data and flawed assumptions" led to fire stations in Brooklyn, Queens, and the Bronx being replaced with smaller ones. The coup de grace was the politicization of the supposedly "scientific" project: clever RAND officials, realizing that rich folk in well-to-do neighborhoods would not tolerate the effects of "efficiency" according to their (flawed) simulations, simply placed such neighborhoods outside the scope of the project.

And on and on the story goes. Unintended consequences are simply part and parcel of the development of causal or predictive models using quantitative data gleaned from messy, complex systems. The real folly, however, in the Pegasus project and so many others like it, is not in the (basically correct) idea that quantitative analysis can provide useful information when devising strategies, for urban planning or otherwise, but in the claim that the human element can therefore be eliminated. The latter does not follow, and taking it too seriously will almost certainly guarantee that among the lessons we learn from the "Center for Innovation, Testing, and Evaluation", one of the most important will be that innovation, testing, and evaluation are not enough.

Saturday, September 17, 2011

I Eat Yogurt, Therefore I Am

In the WSJ today, Jonah Lehrer--who dropped out of his Ph.D. program in neuroscience to make millions writing Barnes and Noble science books like "Proust Was a Neuroscientist" and "How We Decide"--wrote a piece in the review section about eating yogurt and its connection to the mind-body problem. The basic idea is that yogurt makes you less anxious because it contains probiotics, which produce GABA, a neurotransmitter that inhibits the firing of neurons. This is all true enough, I'm sure, just as it's true that eating simple carbohydrates gives one a feeling of energy followed by a "crash". It's no mystery that the types of foods we eat affect how we feel. But it's quite a leap from this sensible factoid to conclusions about the nature of the mind--whether it's distinct from the brain or, more generally, from our physical bodies. In fact Lehrer glosses over the pivotal conceptual conundrum: all the gastronomic observations he or anyone else adduces in favor of theories about the nature of mind are consistent both with theories that merely correlate mind and body and with those that identify them. C'mon, Jonah, you surely must know this. Was it that hard to find something to say?

Wednesday, September 14, 2011

Cosmic Rays and Climate Change: Shhh!

I have no idea whether there's any scientific validity to the research conducted at the European Organization for Nuclear Research, aka CERN, on whether cosmic rays affect climate on Earth. What is interesting is the implication in Anne Jolis's September 7 article, The Other Climate Theory, that researchers have long speculated that not just CO2 but cosmic rays may change our climate. Where's this debate in the media? Roger W. Cohen, in a WSJ response to Jolis's article, claims that the Anthropogenic Global Warming (AGW) camp--the scientists who think that the primary cause of the warming Earth is human activity--is actually much smaller than the media's framing of the debate suggests, and that there is in fact another school of thought among scientists holding that non-anthropogenic factors may be driving the changes. In this "contrarian" school, scientists tend to group into those interested in investigating the influence of cosmic rays and those interested in the hypothesis that the Earth naturally and quickly changes temperature through its own "unforced chaotic variations". Whatever the merits of these discussions, why haven't we heard them? That's a question even a non-atmospheric scientist can pose.

Monday, September 12, 2011

Oh, Right, I Have a Blog

I've been away on an extended hiatus as CEO of a software startup, which is still ongoing, but I just can't stay away from blogging any longer. I picked up a copy of the NYT this morning at the Starbucks, which is proof positive that I'm missing the ole' blogging world.

So, I'll start out modestly with a phenomenon that I'm sure we're all familiar with, but is really kind of silly if one stops to consider...

The Token Door Shove... (dramatic music starts now)

This refers to the little polite "shove" we give the door as we're entering a building and it's closing on the person entering behind us. The TDS is silly because, more often than not, it makes zero difference to the life of the person behind us; in some cases it might actually make things nominally worse, as just opening the door afresh would be easier than attempting to coordinate the door grab post-push. We do this, of course, because it's a signal that we recognize the person behind us, which in general is a good thing. But, again, it's silly because it doesn't really matter. It masquerades as having some positive benefit, when in fact it's pure theater (if it actually does help in some particular case, great, but it's a knee-jerk thing that we never think through in the first place, which is The Point). How many other benign actions do we engage in just to reassure those around us that we're polite members of civilization? How about the little purse of the lips we give to passersby? Do you know the one? Ever so soft and reassuring, and completely useless--unless it's midnight in a shady part of town, of course, where it might give evidence that we're not in attack mode with, say, a rusty screwdriver concealed behind our back (but what if it's just facial-expression subterfuge?). But if we're in the shady part of town, we wouldn't look at the passerby at all, would we? So, again, we're hell-bent on appearing polite to each other, whether we're actually helpful to one another or not.

Friday, July 9, 2010

The Wisdom of Crowds

I had an interesting discussion with one of my (many) liberal friends recently, and a couple of points came out of our prolonged exploration of every idea we could think of in the span of an evening. The first point worth sharing, I think, is as follows. The Barnes and Noble science section is packed with books explaining that we can't predict the future in systems that aren't governed by so-called normal distributions (though we think we can--a kind of persistent overconfidence in our epistemic abilities). The point here is that there's all this complexity in everyday life and social life (think economics), and we're under the illusion that having a Ph.D. in economics and pointing to charts renders all of it moot. I'm talking about Taleb's "The Black Swan", books like "The Drunkard's Walk", everything Malcolm Gladwell has ever written, et cetera. So the point is, it turns out that huge parts of the world--and interestingly, ordinary parts of the world, like society and culture, politics, economics--are effectively black boxes with respect to prediction. We really just don't know what tomorrow will bring. This has implications--huge implications--for the role of government, or in general the role of experts, in advising the rest of us on what courses of action should be undertaken. Much of this advice should properly be seen as speculation (there's even research suggesting that experts are actually worse at predicting outcomes in complex "human" systems than non-experts).
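To make the normal-versus-not point concrete, here's a toy simulation (numbers invented by me, not drawn from any of those books): events a normal-world forecaster would call impossible are routine once the tails get heavy.

```python
# Toy demo: "five-sigma" events essentially never occur under a normal
# distribution but are commonplace under a heavy-tailed (Pareto) one.
import random

random.seed(0)
N = 100_000
normal_draws = [random.gauss(0, 1) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]  # heavy tail

big_normal = sum(abs(x) > 5 for x in normal_draws)  # expect roughly 0
big_pareto = sum(x > 5 for x in pareto_draws)       # expect ~9,000

print(big_normal, big_pareto)
# A model calibrated on the normal draws calls x > 5 a non-event; in
# the heavy-tailed world it happens about nine percent of the time.
```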

So that's point one. The other point is the "wisdom of crowds" notion, another concept that accounts for dozens of books in the BN science section (you know, where the smart-people-wanna-bes congregate). This idea that many problems are solved by aggregating viewpoints--explored in books like, ahem, The Wisdom of Crowds, Infotopia, and Jeff Howe's Crowdsourcing--suggests that having some expert decide things can be really stupid. In fact it turns out that groups--crowds--can often arrive at better solutions to problems than even one person who is educated and expert at solving problems of that type. It turns out, for instance, that allowing people to "bet" on some future outcome often produces the best prediction of that outcome; asking an expert to figure out the outcome would result in a worse prediction. Just ordinary Joes (and of course experts too), if there are enough of them, throwing down their money on which horse will win, or which companies to buy stock in, or which presidential candidate will win the election, often produce a better prediction in aggregate than the most expert horse person, or stock picker, or election pundit.
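Here's the classic illustration (Galton's ox-weighing story) as a toy simulation--my invented numbers, just to show the mechanism:

```python
# Toy wisdom-of-crowds demo: many individually lousy guesses, averaged,
# beat one well-informed expert. All numbers are made up.
import random

random.seed(42)
TRUE_WEIGHT = 1198  # pounds -- the ox in Galton's famous anecdote

# 800 fairgoers, each off by as much as 300 pounds either way
crowd = [TRUE_WEIGHT + random.uniform(-300, 300) for _ in range(800)]
crowd_guess = sum(crowd) / len(crowd)

# one expert, individually far more accurate than any single fairgoer
expert_guess = TRUE_WEIGHT + random.uniform(-75, 75)

print(abs(crowd_guess - TRUE_WEIGHT))   # typically a few pounds off
print(abs(expert_guess - TRUE_WEIGHT))  # typically tens of pounds off
```

The errors of the individual guessers cancel out; the expert's single error doesn't.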

So this cutting-edge research makes everyone feel all enlightened and egalitarian and up to date with the latest tidbits of "didn't you know?" science. Only thing is, this is the best, greatest empirical argument for free markets ever. Let ordinary people figure out what to buy, where to shop, how the economy should go, from the ground up, so to speak. Science suggests that this libertarian technique often results in better solutions than central planners sitting in government offices. This really amuses me, because the folks feeling all educated reading the wisdom-of-crowds literature--folks interested in social networking technology, reading Marx, sniffling about how Republicans are idiots, eating tofu--yes, these folks are in fact reading powerful arguments for individual liberty, limited government, and the wisdom in crowds, not government planners trying to engineer the Good Society for everyone else. (And they seem not to know it. Ha!) The latter just isn't optimal, if you believe the latest Gladwellesque arguments coming out of research on decision making.

Which brings me to my little wrap-up. When I'm not sitting in cafes and I'm actually doing serious work, I'm reading Hayek, the Nobel economist who is widely credited as an intellectual precursor to libertarianism, particularly with regard to government involvement in the economy. Hayek, that Ph.D. egghead himself, nonetheless argued (in my view persuasively), back in the 1940s, that because no one person can possibly know everything, the best strategy for a society is to vest more and more power in individuals. Hence individual liberty is the strategy most likely, over time, to result in better outcomes. Makes sense. The wisdom in crowds.

So the point is, again, that all of these insights emerging from the latest research in the social sciences point back to non-government-controlled solutions to our most pressing problems. They point to a model where government's most important job is to structure society in such a way that its citizens--all of us--can choose how best to live, what to buy, whom to give money to, how to use our own money, and on and on. The aggregation of all of these individual voices makes things work better (right, comrades?). But of course, when people are free to choose and to decide much of their lives on their own, disparities in wealth and natural talents will produce disparities in society. And that bothers all of my liberal friends. "Can't we just control people and liberate them too?" perhaps I heard (didn't they tell us in school? you have to control people to try to make them equal, since we're not naturally that way). But no: I think, unfortunately, that in the aggregate there's a better and a worse way to do things. And in the aggregate, there will always be winners and losers in the crowd.

On Lawhatever James and Passion for World Cup

LeBron James makes a much-ballyhooed announcement about which organization he will work for, continuing to bounce a ball on a wood floor and throw it through a metal hoop to the adoration of millions. Don't care. Not that I'm bothered by fame per se. I love Lady Gaga, and she's a fame monster.

What other grumbly points can I make? Oh yes, I almost forgot. In the ongoing attempt of all enlightened Americans to outdo each other in self-deprecation, the Bay Area is aflame with passion for the World Cup. Who gives a damn? Uruguay? I don't even know if I pronounced it right, and you know what, who cares? Uruguay? Yes, it's vitally important that we all hang on the edge of our seats to see this world power at their finest hour (actually I think they lost). Interestingly, when I show a little American nationalism amongst my European friends here in Palo Alto, they seem strangely, ironically, to appreciate it. It's almost as if everyone is thinking: "Americans, get some cojones! The world's superpower, and your educated elite are tripping over themselves to appear embarrassed by your success, desperately trying to convince us that you're okay because you love watching Uruguay kick a ball around, clapping so loudly you make yourselves look silly. We wouldn't do that, says the Frenchman, smiling. We love France. Suckers!"