Thursday, April 23, 2020

Here I am, back writing on thaxis. This blog will need updating to account for all the ballyhoo around Deep Learning in AI: beginning in about 2012, that approach came to dominate work on AI, which in 2020 (today) is entirely clear. In fact, since I last posted to thaxis (ignoring two travel posts that have been relocated--more on that shortly) back in Spring 2014 in Chicago, I've written a book on AI called The Myth of Artificial Intelligence, which is currently with the publisher, Harvard University Press, awaiting edits. The book summarizes many of the complaints and critiques I've aired here on thaxis, and I'm excited that it will be available everywhere for readers in (I hope) early 2021. Stay tuned!
As for travel writing, I've started another blog, Erik Travels The World. You can find it here: larsonoferik.wixsite.com/website. Enjoy!
Sunday, May 11, 2014
Artificial Intelligence Is Back, Or Not.
Last year John Markoff of the NYT wrote about the rapid advance of Artificial Intelligence. This year, writing in the New Yorker, Gary Marcus suggests that it's all hype. Not only hype, but hype again. Marcus, a cognitive psychologist at New York University, seems to have a point: how many times have we heard that the "smart robots" are just around the corner?
You Do Want To Express Yourself, Don't You?
Tumblr, it seems, is bucking the trend on the Web toward one-size-fits-all. Tumblr founder David Karp says the Web used to be chaotic and messy, to be sure, but also more fun, and more personal. Back in the salad days of the Web (before, say, the early 2000s), the lack of standards meant more of your personality and creativity could be expressed on blogs and websites. Today, the utilitarian focus in the Valley, and the engineer's mindset of efficiency and control, have slowly squeezed out such possibilities, ironically rendering a Web obsessed with buzzwords like "personalization" much less personal.
Karp's not a lone voice here, either. Jaron Lanier, Virtual Reality pioneer and author of the 2010 hit You Are Not A Gadget, argues the same point. In fact, Lanier's critique--first in YANAG, then in his 2013 sequel Who Owns The Future?--is more trenchant than the milquetoast remarks from Karp bemoaning the cookie-cutter trends on the modern Web. Lanier, for instance, thinks that so-called Web 2.0 designs favor machines and efficiency, literally at the expense of "personhood" itself. For Lanier, the individual creator has no real home on the Web these days, as sites like Facebook force people to express their personalities in multiple-choice layouts and formatted input boxes. To Lanier, these surface designs evidence even deeper attacks on personhood, like redefining the very notion of "friend" into something shallow and unimportant. The Web is dominated by a "hivemind" mentality where no one person really matters, and the collective serves some greater purpose, like building smarter machines. The Web, concludes Lanier, is set up to capture the machine-readable features of people for advertising and other demeaning, anti-humanist ends, not to enlarge and empower them as individuals.
Still, if you're a Lanier fan, Mr. Karp's remarks seem headed in the right direction, even if only because you can now customize the look and feel of your Tumblr blog on your mobile device. But the deeper question here is whether you have anything much to say on a blog in the first place, and whether the Web environment is cordial and prepared to hear it, if you do. Maybe Tumblr's counter-steer is minimal, but one hopes that further and more meaningful changes are still to come.
Friday, April 18, 2014
Rethinking Technological Determinism
Consider the case of Albert Einstein. His now-famous paper on Special Relativity, published in 1905, had such a seismic impact on our understanding of physics that it--along with his theory of General Relativity--eventually razed the entire edifice of classical mechanics, dating back to that other genius, Isaac Newton. Relativity, as we now know, replaced Newton's laws for objects in uniform motion (special relativity) and for gravitation and accelerated motion (general relativity). Back in 1905, in what's come to be called the "annus mirabilis", or "miracle year", Einstein published fully three other groundbreaking papers before the year's end as well. Taken together, they rewrote our basic ideas about the fundamental workings of the physical universe. What a year it was. Working at a patent office, without "easy access to a complete set of scientific reference materials", and additionally burdened by a dearth of "available scientific colleagues to discuss his theories", Einstein nonetheless plodded along in what today would be an almost ludicrously information-poor environment--no Google Scholar, no Facebook, no Twitter feeds--publishing his landmark theories on the photoelectric effect, Brownian motion, mass-energy equivalence, and special relativity. Collectively, the papers laid the two central pillars of the New Physics: Relativity and Quantum Theory.
How did he do this, with so little to work with? Contrast the case of Einstein--lowly Einstein, sitting in the patent office, cut off from vast repositories of other thinkers' theories--with much of the discussion these days about smarts and information. I call today the age of "hyper-information." Much of the hyper-information buzz comes out of Silicon Valley and is (of course) tied to "the Internet." The Internet visionaries (and here "Internet" as a "thing" is itself misleading, but that's another discussion) believe that we're getting "smarter", that technology itself is imbued with "smarts" and gets smarter, and that the world is changing for the better, and rapidly. It's progress, everywhere. The implication is that everyone will be an Einstein soon (or at least their best, most informed selves), and even our machines will be Einsteins too. Kevin Kelly (of Wired fame) writes about the "intelligenization" of our world, and the famous (infamous?) entrepreneur/visionary Ray Kurzweil explains how we're even getting smarter in a quantifiable, measurable way: we can graph innovation and intellectual endeavor as an exponential curve, the "Law of Accelerating Returns." With all this exciting talk, we might be tempted to think Einsteins are springing up everywhere these days, too. But no, unless I'm missing something. One might be tempted, sure, but a cursory tour through today's "Internet" culture will quickly disabuse one of this notion (Tweet, anyone?).
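To make the claim I'm criticizing concrete: the "Law of Accelerating Returns" is, at bottom, the assertion that some measure of technological capability doubles on a fixed schedule, so that progress compounds exponentially rather than accumulating linearly. Here's a minimal sketch of that shape. It is my own toy illustration, not Kurzweil's actual model; the doubling period and growth rates are arbitrary assumptions chosen only to show the two curves:

# Toy comparison (illustrative only; the numbers are made up):
# a quantity that doubles every fixed period (exponential) versus
# one that adds a fixed increment each year (linear).

def accelerating_returns(years, doubling_period=2.0, start=1.0):
    """Capability that doubles every `doubling_period` years (exponential)."""
    return start * 2 ** (years / doubling_period)

def linear_progress(years, rate=1.0, start=1.0):
    """Capability that improves by a fixed amount per year (linear)."""
    return start + rate * years

for t in (0, 10, 20, 30, 40):
    print(f"year {t:>2}: exponential = {accelerating_returns(t):>12,.0f}   linear = {linear_progress(t):>4.0f}")

Whether "innovation" or "intelligence" is the kind of thing you can put on such a curve at all is, of course, exactly the question.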
But why not? Why not truly big ideas, springing up everywhere, with all this super-information-guided intelligence in the air? Indeed, why do we still talk about "Einstein" at all? (He's so 1905.) There's an easy answer, namely that "Einsteins don't come along very often in any era, so we'll just have to wait." It's a good one, I think, but it's an uncomfortable fit with the exponential-curve rhetoric about our rapidly "intelligenized" (that's not even a word!), hyper-informed world. (Nassim Taleb, author of The Black Swan, suggested that we'd make better predictions in economics if we stopped reading all the "just-in" news sources.) "Well, just how soon can we expect another Einstein-type genius? We have problems to solve, after all! Is he right around the corner, in a patent office (or maybe he's a Sys Admin), as yet undiscovered?"
In fact, the silliness of this inquiry belies an underlying theme that I think is a bit more troubling (and serious). The fact is, many of the Technorati--the digital visionaries and know-it-alls of all things technological--don't really want to discuss individual genius anymore. The game has changed, and it's telling what's been left out of the new one. Kelly goes so far as to explain that "individual innovation" is a myth; what we call innovation is really the deterministic evolution of ideas and technology--it's predictable. Whereas the philosopher of science Karl Popper argued that technological innovation is intrinsically unpredictable (as is economic or social prediction), the inventor Ray Kurzweil puts it on a graph. And Kurzweil argues that it's not just the pace of technological change, but the nature of the innovations themselves, that we can predict. We know, for instance, that computers will be smart enough by 2040 (give or take) to surpass human intelligence ("intelligence" is always left poorly analyzed or un-analyzed, of course, and it's a safe bet that what's ignored is whatever Einstein was doing). From there, we can pass the torch for innovating and theorizing to machines, which will crank out future "Einsteins", as machines will be smarter than all the humans on the planet combined (Kurzweil really says this). In other words, our "hyper-information" fetish these days is also a deliberate refocus away from singular human brilliance toward machines, trends, and quantification. Worrying about Einstein is yesterday's game; today, we can be assured that the world will yield up predictable genius in the form of the smart technology of the future, and from there, we can predict that superintelligent solutions and ideas will emerge, answering any pressing problems that remain.
But is this the way to go? Is it just me, or does this all sound fantastical, technocratically controlling (surely not an "Einstein" virtue), and, well, just plain lazy and far-fetched? Consider: (1) Where is the actual evidence for the "intelligenization" of everything, as Kelly puts it? What do we even mean here by intelligence? We need good ideas, not just more information, so how do we know that all of our information is really leading to "smarter"? (Sounds like a sucker's argument, on its face.) (2) As a corollary to (1): in a world where we are quantifying and measuring and tracking and searching everything at the level of facts, not ideas, who has time to do any real thinking these days, anyway? Here we have the tie-in to Kelly and his "intelligenization" meme: how convenient to simply pronounce all the gadgets around us intelligent, and thereby obviate or vitiate the gut feeling that thinking isn't happening in ourselves as much. So what if we're all so un-Einstein-like and acting like machines? Smarts are now in our devices, thank God. (And, murmured perhaps: "This has to rub off on us eventually, right?") And finally (3), is automation--that is, computation--the sort of "thinking" we need more, or less, of today? Might computation sometimes--occasionally, maybe frequently--in fact be at odds with the kind of silent, unperturbed contemplation that Einstein was doing, back in 1905, in his information-poor environment (even by the standards of his day)? Could it be?
In fact, history is replete with examples like Einstein's, all suggesting that the "hyper-information" modern mindset is apples-and-oranges with the theoretical advances human societies require to make large conceptual leaps forward, embracing new, hitherto unforeseen possibilities. In astronomy, for instance, the flawed Ptolemaic models built from perfect circles and devices like epicycles, equants, and deferents were maintained and defended with vast quantities of astronomical data (mainly about the positions of the planets). No one bothered to wonder whether the model was itself wrong, until Copernicus came along. Copernicus wasn't a "data collector" but was infused with an almost religious conviction that a heliocentric model would simplify away the fudges and calculations and data manipulations required by the older Ptolemaic systems. Like Einstein's, his idea was deep and profound, but it wasn't necessarily information-centric, in today's sense. The complex calculations performed by Ptolemaic astronomers were impressive at the shallow level of data or information manipulation (since orbits are actually ellipses, not perfect circles, the numerical work required to predict the movements of the known planets was extremely complex), but they missed the bigger picture, because the underlying conceptual model was itself incorrect. It took someone sitting outside all that data (though he surely was knowledgeable about the Ptolemaic model itself) to have the insight that led to the (eponymous) Copernican Revolution.
Similarly with Newton, who was semi-sequestered when working out the principles of his mechanics (his three laws of motion, along with the law of universal gravitation). Likewise with Galileo, who broke with the Scholastic tradition that still dominated Europe at the time, and was thinking about Democritus, an almost forgotten pre-Socratic philosopher who predated Aristotle and played a negligible role in the dominant intellectual discussions of the day.
Fast forward to today. The current fascination with substituting information for genuine knowledge or insight fits squarely in the Enlightenment framework promulgated by later thinkers like Condorcet. In fact, while true geniuses like Copernicus or Galileo or Newton gave us the conceptual foundations for an entire worldview shift (and one must include Descartes here, of course), the Enlightenment philosophes immediately began constructing a shallower rendition of these insights, a Quantifiable Culture type of thinking that reduced complex human phenomena to quantifiable manipulations and measurements (can someone say, "mobile phone apps"?). Whereas Copernicus was infused with the power and awe of a heliocentric universe (and the Sun itself was an ineffable Neo-Platonic symbol of divinity and intelligence), the Condorcets of the world became enamored with the power and control that quantifying everything could confer upon us, at the level of society and culture. And today, where a genius such as Alan Turing offered us insights about computable functions--and also proved the limits of those functions, ironically, even as he clarified what we mean by "computation"--we've translated his basic conceptual advances into another version of the shallow thesis that culture and humanity are ultimately just numbers and counting and technology. In this type of culture, ironically, new instances of genius are probably less likely to recur ("numbers and control and counting" aren't the sort of ideas that Einstein or Newton trafficked in; the mathematical similarities here are a mere surface patina). Indeed, by the late 18th and early 19th centuries the interesting science was again on the "outside" of the Enlightenment quantification bubble, among iconoclasts such as Joseph Banks, who helped give rise to what we now call the "Romantic Age of Science", and later in the thinking of Hamilton and Maxwell. This, so soon after Condorcet proclaimed that all questions would be answered (and, of course, from within the framework he was proposing). And so it's a clever twist to go ahead and explain away human ingenuity, genius, and all the rest, as Kelly et al. do, by striking derogatory, almost antagonistic attitudes toward human innovation. What bigger threat to technological determinism exists than a mind like Einstein's? Better to rid the world of his type, which is in essence what Kurzweil's Law of Accelerating Returns attempts to accomplish, by ignoring the possibility of radical innovation that comes from truly original thinking.
We've seen this modern idea before--it's scientism (not science), the shallower "interpretations" of the big ideas that regrettably step in and fashion theories into more practical and controlling policies. Unfortunately, these "policies" also reach back into the deeper wellspring out of which truly original ideas originate, and attempt to rewrite the rules for the outsiders--the Einsteins and all the rest--so that they no longer pose a threat to a brave new world of hyper-information. This misguided motive can reach such absurdity that the very possibility of original thought is explained away, as with Kelly. To say that such a collection of views about human culture is selling us short is putting it mildly.
The point here is that a flourishing culture inspires debate, original thinking, and a diversity of ideas. In today's hyper-information world, neutrality and objectivity are illusory. So, I fear, is genuine thinking. When our thought leaders proclaim technological determinism, and provide us with viewpoints so impoverished that discussions of Einstein are deemed quaint and simply irrelevant in the coming techno-utopias, we're committing the old fallacy of fashioning technocratic, debate-chilling policies and worldviews out of deeper and more profound original thoughts. We ought instead to ask the hard questions about how we change history (and what needs to be changed), rather than produce bogus graphs and pseudo-laws (like the Law of Accelerating Returns) that ignore history and bigger ideas. We need these ideas--and the thinkers who give birth to them. The future is open, and it's ours to make: "Who should we be?" And: "How shall we live?" These are the questions every generation asks anew, and it's the current generation's task to reveal the full spectrum of possibilities to the next, or at least to allow for enough inefficiency and dissent that new possibilities can emerge. We can't manufacture Einsteins (Kurzweil's ideas notwithstanding), but we can make a culture where a sense of possibility is an intellectual virtue, and where humility about the future is a virtue too, rather than being dismissed as ignorance of, or resistance to, the supposed march of technological progress. As always, our "progress" lies in our own unexplored horizons, not in our tools. Just ask Einstein.
Friday, April 4, 2014
Mankind in Transition
A strange, almost creepy book by Masse Bloomfield, a guy I'd never heard of before. He offers a usable description of technology's progression from tools, to machines, to automation, and argues that our destination is interstellar travel (in contrast to someone like Kurzweil, who would argue that our destination is the Machineland itself). His view of robots, powered by Artificial Intelligence, appears to preserve the difference between biologically inspired minds and machines. He sees AI as necessary for controlling robots, which will become ubiquitous in military applications. This suggestion seems entirely plausible, actually (setting aside the ethical dimensions here), and his view of artificial intelligence as the software that controls robots for practical uses seems, well, plausible too.
Smarter Than You Think
Clive Thompson is fed up with all the cautionary digital talk these days. His book argues that the Internet is making us all smarter. Take that, Nicholas Carr.
Rapture for the Geeks
A bit dated, but here's an excellent article on the links between checking your fav mobile device and addiction. Speaking of dated, here's a NYT article going all the way back to 2000 on the connection between computer networks and biological systems (both evolving, as it were). I found these references in the chapter notes of a quirky but eminently readable little book called Rapture for the Geeks, by Richard Dooling. The subtitle is "When AI Outsmarts IQ" and it's a semi-tongue-in-cheek look at Strong AI and visions of the future like the Singularity.
On the first article, you can Google "mobile phones and addiction" or what have you and get the gist. Most of the discussion is wink-and-nod; I'm sure there are some serious studies out there. The Huff Post talks about it here. Whether compulsively checking or attending to your mobile counts as an "addiction" inherits all the baggage of talk about "addiction" generally, but it's clear enough these days that many of us would be happier if we engaged in that sort of behavior less often.
On the second article, it seems there's a general, somewhat ill-defined notion out there that computational networks are evolving, and evolving similarly (in some sense) to biological networks. Or, rather, that the concept of the evolution of complex systems is general enough to include technological evolution (of which digital technology, and especially the Internet, is a subset). This is a beguiling notion when viewed from afar, but when you zoom in on it, hoping for clarity, it's tough to find the meat and potatoes of it all. How is this system evolving like that one? one is tempted to ask. Or rather, if "evolution" is that generic, what does it cover? What doesn't evolve? To put it all in a nutshell: in what interesting, non-trivial sense is technology evolving the way we think biological species have evolved (and are evolving)?
I'm naturally skeptical of all the geek-rapture about smart machines, and my hunch is that there's no real there there. Technology is simply getting--well, we're getting more and more of it, and it's more and more connected. And that's about all there is to the idea, when you analyze it with something like a thoughtful and healthy skepticism. Nothing wrong with skepticism; we could use more of it these days, it seems.
On the book, I dunno. Read it if you're interested in the whole question of whether machines are becoming intelligent like humans.