In the last section, we surveyed the rise of search, focusing on (who else?) Google, and saw how Google's insight about human judgments in HTML links propelled Web search into the modern era. In this vein, we can see the beginning of the entire social revolution (roughly, from Web 1.0 to Web 2.0 and on) as a story that begins with "real" Web search and Google's PageRank idea. Yet we ended this feel-good section back where we started, with all the original worry about the Web making us stupid--a view given recent voice by folks like Carr and Lanier, and even more recently by the latest Atlantic Cities article on the dangers of photo sharing, which frets about our memories and memory formation in the Instagram age (always, alas, we're worried about our brains online). What gives? This is our question.
Before answering it, though, it'll be helpful to review the general landscape we've been traversing. Back to the beginning, then, we have:
(1) Increasingly, smart people are worrying about the downside of modern technological culture (basically, "Web culture"). Indeed, studies now emerging from cognitive psychology and neuroscience suggest that there's a real, actual threat to our cognitive selves on the Web (our brains and brain activities like memory, attention, and learning).
(2) As a corollary of (1), the picayune dream of something like instrumentalism--we use a technology as we wish, and it doesn't really change us in the process--is almost certainly false with respect to Web culture.
(3) From (1) and (2), the Web seems to be changing us, and not entirely (or even mostly, depending on how moody one is) for the better.
(4) But the Web seems like the very paragon of progress, and indeed, we've been at pains in the last section to explain how the Web (or Web search with Google) is really all about people. It's all about people-smarts, we've argued, and so how can something about us turn out to be bad for us? Isn't the "Web" really just our own, ingenious way of compiling and making searchable and accessible all the content we think and write and communicate about, anyway?
(5) And so, from (1)-(4), we get our question: what gives?
That's our summary, then. And now we're in a position to address (5), or at least we've got enough of a review of the terrain to have a fresh go at it now. To begin, let's make some more distinctions.
More Distinctions (or, Three Ways the Web Might be Bad). These are general points about Web culture, and we might classify them roughly as (1) Bad Medium, (2) Distracting Environment, and (3) Trivial Content.
(1) Bad Medium
For years, people have noted, in anecdotes and general hunches or preferences, the differences between physical books and electronic Web pages. Back in 2000, for instance, in the halcyon days of the Web, noted researchers like John Seely Brown (who admittedly worked for Xerox) and Paul Duguid argued in The Social Life of Information that "learning" experiences from printed material seem to be of a qualitatively different sort than "learning" experiences we get from reading lighted bits on an artificial screen. Books, somehow, are more immersive; we tend to engage a book, whereas reading text on a Web page we're tempted to skim instead. We might call this an umbrella objection to taking the Web too seriously, right from the get-go, and I think there are some real teeth in it. But onward...
(2) Distracting Environment
Many of Carr's points in his original Atlantic article "Is Google Making Us Stupid?", and later in his book The Shallows, are (2)-type objections. Roughly speaking, you can view Carr's point (and the research he points to that suggests his point is valid) as something akin to the well-known psychological result that people faced with endless choices tend to report less intrinsic satisfaction in their lives. It's like that on the Web, roughly. If I can read my email, take in a number of tweets, get Facebook updates, field some IM, and execute a dozen searches all in fifteen minutes, it's hard to see in practical terms how I'm doing anything, well, deep. Any real cognitive activity that requires focus and concentration is already in pretty dire straits in this type of I-can-have-anything-all-the-time information environment. And, again, for those tempted to play the instrumentalist card (where we argue that in theory we can concentrate, we just need to discipline ourselves online), a growing number of brain and behavioral studies suggest the problem is actually intrinsic to the Web environment. In other words, we can't just "try harder" to stay on track (though it's hard to see how this would hurt); there's something about our connection to information on the Web that actively militates against contemplation and concentration of the kind required to really, thoroughly engage or learn something. As Carr summarizes our condition, we're in The Shallows. And since we're online more and more, day after day, we're heading for more shallows.
(3) Trivial Content
Many of Lanier's arguments in his You Are Not a Gadget are explorations of (3). Likewise, someone like former tech-guy Andrew Keen advances objections of the Trivial Content sort in his The Cult of the Amateur. As I think Lanier's observations are more trenchant, we'll stick mostly to his ideas. Trivial Content is really at the heart of what I wish to advance in this piece, actually, so we'll turn to it in the next section.
Whoops! Idiocracy
Monday, December 30, 2013
Enter "Search"
You can throw around some impressive numbers talking about the Web these days: a trillion Web pages (so says Wired co-founder Kevin Kelly), and as of this writing 1.59 billion of them indexed on search engines. Google, of course, is the story here--as much today as a decade ago. When its "BackRub" search engine debuted on Stanford University's servers back in the late 1990s, within a year the software was VC-funded and moving out of its academic roots and into commercial tech stardom. Since then, many of the needle-in-a-haystack worries about finding information on the exponentially growing World Wide Web have become largely otiose. Why? Because, generally speaking, Google works.
But like many great ideas, the Google recipe for Web search is somewhat paradoxical. On the one hand, Google--as a company and as a search technology--is the paragon of science, engineering, and numbers. Indeed, the math-and-science ethos of Google is part of its corporate culture. Visit the Googleplex--the sprawling campus in Mountain View, California, where Google is headquartered--and you'll get a sense that everything from employee work schedules to seemingly cosmetic changes on the homepage to geek-talk about algorithms is subject to testing, to numbers. Google is data-driven, as they say. Data is collected about everything--both in the company and on the Web--and then analyzed to figure out what works. Former CEO Eric Schmidt once remarked, tellingly, about his company that "in the end, it's all just counting." And it is, of course.
On the other hand, though, what propelled Google to stardom as a search giant (and later as an advertising force) was the original insight of founders Larry Page and Sergey Brin--two Stanford computer science students at the time, as we all now know--that it's really people, and not purely data, that make Google shine. PageRank, named after its inventor Larry Page, is what made Page's pre-Google "BackRub" system so impressive. But PageRank wasn't processing words and data from Web pages; it was processing links, in the form of the HTML back-links that connect Web page to Web page and make the Web, well, a "web."
Page's now famous insight came from his academic interest in the graph-theoretic properties of collections of academic journal articles connected via author references, where the quality of a particular article could be judged (roughly) by examining references to it from articles whose authors have known authority and credentials on the same topic. Page simply imagined the then-nascent World Wide Web as another collection of articles (here: Web pages) and the HTML links connecting one to the other as the references. From there, the notion of "quality" implicit in peer-reviewed journals could be imported into the Web context, and he had the germ of a revolution in Web search.
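For the programmatically inclined, here's a minimal sketch of that idea in Python: pages pass their "votes" along their outgoing links, and votes from higher-ranked pages count for more. The toy link graph and the damping factor below are my own illustrative assumptions, not anything from Google, and the real system is of course vastly more elaborate.

```python
# A toy power-iteration sketch of the PageRank idea: pages "vote" for the
# pages they link to, and votes from highly ranked pages count for more.
# The link graph and damping factor are invented for illustration.

links = {
    "stanford.edu": ["berkeley.edu", "acm.org"],
    "berkeley.edu": ["stanford.edu"],
    "acm.org":      ["stanford.edu", "berkeley.edu"],
    "myblog.com":   ["stanford.edu"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                          # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share         # pass rank along each link
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:15s} {score:.3f}")
```

The point of the sketch is just the shape of the insight: rank flows along human-made links, so a page referenced by well-referenced pages floats to the top without anyone reading a word of it.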
Of course it worked, and almost magically well. When Page (and soon Brin) demoed the BackRub prototype, simple queries like "Stanford" or "Berkeley" would return the homepages of Stanford University or the University of California at Berkeley. (Yes, that's pretty much it. But it worked.) It seems a modest success today, but at the time, Web search was a relatively unimportant, boring part of the Web that used word-frequency calculations to match relevant Web pages to user queries. Search worked okay this way, but it wasn't very accurate and it wasn't very exciting. Scanning through pages of irrelevant results was commonplace.
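To see why the old word-frequency approach disappointed, here's a rough sketch (the tiny "index" below is invented for illustration): scoring pages purely by how often the query terms appear lets a keyword-stuffed page beat the page you actually wanted, which is exactly the pages-of-irrelevant-results experience described above.

```python
# A rough sketch of pre-PageRank keyword search: score pages purely by
# how often the query words appear in them. The "index" is invented.

pages = {
    "stanford.edu": "stanford university stanford campus admissions stanford",
    "fanpage.com":  "stanford stanford stanford stanford best page about stanford",
    "berkeley.edu": "university of california berkeley",
}

def keyword_score(text, query):
    words = text.split()
    return sum(words.count(term) for term in query.split())

query = "stanford"
ranked = sorted(pages, key=lambda p: keyword_score(pages[p], query), reverse=True)
print(ranked)   # the keyword-stuffed fan page outranks the real homepage
```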
Most technologists and investors of the day therefore pictured search technology as a mere value-add to something else, and not a stand-alone application per se. The so-called portal sites like Yahoo!, which used human experts to collect and categorize Web pages into a virtual "mall" for Web browsers and shoppers, were thought to be the present and future of the Web. Search was simply one of the offerings on these large sites.
But the human element used by Yahoo! to classify Web pages was much more powerfully captured by Page and Brin algorithmically--by computer code--to leverage human smarts about quality to rank Web pages. And this is the central paradox: while Google became the quintessential "scientific" company on the Web, it leaped to stardom with an insight that was all too human--people, not computers, are good at making judgments about content and quality. And of course, with this insight, the little BackRub system bogging down Stanford's servers quickly became the big Google search giant. Suddenly, almost overnight, search was all the rage.
Putting it a bit anachronistically, then, you could say Google was, from the beginning, a social networking technology--or at least a precursor. The idea that the intelligence of people can be harnessed by computation led to more recent tech "revolutions" like Web 2.0. For instance, in tagging systems like del.icio.us (now owned by Yahoo!), users searched user-generated tags or "tagsonomies" of Web pages. Tagging systems were a transitional technology between the "Good Old Fashioned Web" of the late 1990s, with its portal sites and boring keyword search (like Yahoo!), and a more people-centered Web where what you find interesting (by "tagging" it) is made available for me, and you and I can then "follow" each other when I discover that you tag things I like to read. Once this idea catches on, social networking sites like MySpace and later Facebook are, one might say, inevitable.
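Here's a rough sketch of that tagging-and-following mechanic, in the spirit of del.icio.us; the usernames, tags, and URLs are invented, and a real system would obviously add accounts, timestamps, search, and so on.

```python
# A minimal sketch of tagging and following: users label URLs with tags,
# and a user's "feed" is whatever the people they follow have tagged.

from collections import defaultdict

tags = defaultdict(set)       # tag label -> set of (user, url) pairs
follows = defaultdict(set)    # user -> set of users they follow

def tag(user, url, label):
    tags[label].add((user, url))

def follow(follower, followed):
    follows[follower].add(followed)

def feed(user):
    """URLs tagged by anyone this user follows."""
    return {url for label in tags
                for (tagger, url) in tags[label]
                if tagger in follows[user]}

tag("alice", "http://example.com/pagerank-paper", "search")
tag("alice", "http://example.com/web2.0-essay", "web")
follow("bob", "alice")        # bob discovers alice tags things he likes to read
print(feed("bob"))
```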
So by the mid-2000s, user-generated content (UGC) like the earlier del.icio.us, a host of user-driven or "voting" sites like Digg (where you could vote for or "digg" a Web page submitted on the site), and large collaboration projects like Wikipedia were simply transforming the Web. Everywhere you looked, it seemed, people were creating new and often innovative content online. As bandwidth increased, visual media sites for sharing photos and videos (e.g., YouTube) emerged and, within what seemed like months, became major Web sites. And as Web users linked to all of this UGC, and Google's servers indexed it, and its PageRank-based algorithms searched it by exploiting the human links, Google's power grew to almost Herculean proportions. Like the sci-fi creature that gets stronger from the energy of the weapons you use to shoot it, every fad or trend or approach that caught fire on the Web translated ineluctably into a more and more powerful Google. By the end of the 2000s, it seemed every person on the planet with an Internet connection was "googling" things on the Web, to the tune of around 100 billion searches per month.
Excepting, perhaps, the idea of a perfect being like God, every idea has its limits, and Google is no exception. Enter, again, our troubling question: how, if the Web is driven increasingly by human factors, and Google leverages such factors, can Google be making us stupid (as Carr puts it)? Why do we need to be assured we're not "gadgets" (as Lanier puts it)? If all this tech is really about people anyway, what gives? "What gives?" is a good way of putting things, and it's to this question that we now turn.
Wednesday, December 18, 2013
Continuation of Things Past
The prior post is rough and this one promises to be choppy. Some notes towards an article, that's all.
Deconstructing the Web
(1) The Web paradox is something like this: once you start treating people like information processing systems--and I'll explain how this works with the cognitive-social model on the Web--"deeper" and core creative intellectual acts lie outside your scope. So the paradox is that all the information at your fingertips leads, in the end, to having less knowledge. It's sort of like a law of human thinking, comparable at least metaphorically to a law of thermodynamics: you can't get something for free. You want lots of information? You have the Web. You want, as Carr puts it, concentration and contemplation? You have to get off of the Web.
(2) None of this really matters--even if you accept the thesis here--if you have an instrumentalist view of technology; you won't see the danger or the problem. But part of my argument is that there is no such thing as instrumentalism; the Web is paradigmatically non-instrumentalist. In fact, you can go "realist" about the non-instrumentalism of the Web and point to actual brain science: our brains are literally changing. So it's not a philosophical debate. It's true.
(3) Getting all the positives of endless information without succumbing to the underlying cognitive-social information processing model is the Big Question. There are two ways to approach this.
(a) Introduce a distinction between Web use and "full" or "natural" human thought and action. A good example here is the distinction between using a network to discover a physical book (say, on Amazon), and actually reading and absorbing what the book says (say, by buying it and then reading it in the physical world).
(b) Change the Web. This is an intriguing possibility, and I think there are a number of promising routes here. Most of the thoughts I have on this matter involve a principle about expertise that I "noticed" a few years ago. Call it the "natural world" principle (or I'll think of a better title), but here are some examples to motivate it:
(1) Someone writes a blog about driving Highway 101, which he does every summer.
(2) Someone writes a review on Yelp about the French cafe in the Mission District in San Francisco, and the reviewer spent the afternoon at the cafe just last week.
(3) Someone writes an article on Heisenberg's Uncertainty Principle or Sartre's Being and Nothingness on Wikipedia, and the person has a degree in mathematics or physics or just took a course on French Existentialists at the University of Kentucky (or wherever).
Revolution Cometh
In all of these examples, there's a principle of knowledge at work, and underlying this principle, there's one of, say, effort. Someone did some actual work in every example. For instance, the fellow with the travel blog actually drove the highway (it's long, it takes time). Or, the customer at the cafe actually went there, and sat down, and ordered an espresso and a croissant. The effort principle underlies the knowledge principle because, well, it takes effort to know things about the world. And whenever people know things about the world and translate that knowledge into bits of information online, we can learn from those bits by reading them (if not experientially, at least cognitively), just as with all communication. In this guise nothing is really that different from fifty years ago; it's like looking at microfiche, say. Doing research. Learning.
But the effort principle is inextricably tied to the knowledge principle, and this is where this model departs from the current Web model. For instance, something like "Web 2.0", or what Lanier pejoratively calls the "hive mind", pulls the effort and knowledge principles apart. Here, a bunch of anonymous "Web resources" (people online) all chip in little bits of effort to make a finished product. Like, say, a Wikipedia entry. The big fallacy here is that you get something from nothing--no single contributor ever really has to know a ton about quantum mechanics, or atheistic existentialism. The focus here is not on what an individual might know (an "expert") but rather on what many anonymous non-experts might collectively "know." And this is where all the trouble starts; for the information processing model that gives rise to the negative conclusions of a Carr or a Lanier (or a New Yorker article about Facebook) is ideally suited to the cognitive-social model that ignores physical-world expertise, and the effort it takes, in favor of anonymous Web resources. If information is processed, hive-like, by so many resources, then--as with any information processing device--the process is what ultimately matters, not the knowledge of experts. Expertise emerges, somehow, out of the process of information processing. Indeed, the idea that what we call "expertise" is actually structural, and exploitable by algorithms, is precisely the idea driving the mega-search company Google. We'll get to Google later.
So to conclude these thoughts for now, what's driving the negative conclusions of Lanier and Carr (to put their conclusion memorably: "the Web is making us stupid") is our participation in an information processing model that is more suited to computers than to people. As this becomes our cognitive-social model, of course we're getting stupider--to the extent, in fact, that computation or information processing is not a complete account of human cognitive-social practices. This point is why someone like Lanier--a computer scientist at Berkeley--can ask, "Can you imagine an Einstein doing any interesting thinking in this [Web] environment?" He's pointing out, simply, that innovation or true creativity or, let's say, "deep" things like what Einstein did have little in common with much of what passes for "thinking" on the Web today. It's not just that lots of people are online and many people aren't Einsteins; it's that lots of people are online and they're all doing something shallow with their heads without even realizing it. As Carr puts it so well in The Shallows, they're surfing instead of digging into ideas; skimming longish text for "bullet points", jumping from titillating idea to idea without ever engaging anything. And, echoing Heidegger again, as the Web isn't simply an instrument we're using but is in fact changing us, the question before us is whether the change is really good, and whether the cognitive-social model we're embracing is really helpful.
All the way back to the beginning of this, then, I want to suggest that far from steering us away from the Web (though this simple idea actually has legs, too, I think), what's really suggestive is how to encourage the knowledge-effort principle in the sorts of technologies we design, implement, and deploy online. I use Yelp, for instance. I use it because someone who actually visits a restaurant is a real-world "expert" for the purposes of my choosing to spend an hour there. It all lines up for an online experience, in this case. They did the work, got the knowledge, and even if they're no Einstein, they're an expert about that place in the physical world (that cafe in San Francisco, with the great espresso).
And likewise with other successes. Wikipedia doesn't "work" as an alternative to a traditional encyclopedia like Britannica because the "hive mind" pieced together little bits of mindless factoids about quantum theory and arrived at a decent exposition of Heisenberg's Uncertainty Principle (magic!). It works because, of all those busy little bees online, one of them had actual knowledge of physics (or was journalistic enough to properly translate the knowledge about physics from someone who did).
But again, the problem here is that the Web isn't really set up to capture this--in fact much of the Web implicitly squelches (or hides) real-world categories like knowledge and effort in favor of algorithms and processing. When Google shows you the top stories for your keywords "health care crisis", you get a virtual editorial page constructed from the Google algorithm. And when you key in "debt crisis" instead (you're all about crises this morning, turns out), you get another virtual editorial page, with different Web sites. Everything is shallow and virtual, constructed with computation on the fly, and gone the moment you move to the next. You're doomed, eventually, to start browsing and scanning and acting like an information processor with no deeper thoughts yourself. So it's a hard problem to get "effort" and "knowledge" actually built into the technology model of the Web. It takes a revolution, in other words. And this starts with search.
Search is the Alpha and Omega
Tuesday, December 17, 2013
Help! The Web is Making Me Stupid (and I like it)
Nicholas Carr wrote a book in 2010 about how the Web threatens (yes, "threatens", not "enhances") cognitive capabilities like concentration and learning. His book, appropriately titled The Shallows, started out as an article that appeared in The Atlantic in 2008, appropriately titled "Is Google Making Us Stupid?" In that article--and subsequently, and in more depth, in The Shallows--Carr suggested that the Web is "chipping away [our] capacity for concentration and contemplation." [Reader: "What's this about the Web? Oh no! Wait, a text. Who's Facebooking me? Check out this video! Wait, what's this about the Web? Who's making us stupid??? Lol."] Yes, maybe Carr has a point.
And he's not alone in sounding an increasingly vocal alarm about the potential downside of all this immersion in modern online technology--the Web. After his provocative Atlantic article, a spate of other books and articles (many of them published, ironically, on the Web) started appearing: the seminal You Are Not a Gadget in 2010 by computer scientist Jaron Lanier, and missives on the dangers of social networking, like Is Facebook Making Us Lonely? a couple of years later, in 2012 (again in The Atlantic), or the New Yorker's How Facebook Makes Us Unhappy earlier this year.
And the trend continues. Witness The Atlantic Cities' latest warning shot about the explosion of online digital photography, How Instagram Alters Your Memory. Peruse this latest (remember--if only you can--that you won't read it that deeply) and you'll discover that as we're running around capturing ubiquitous snapshots of our lives--from the banal to the, well, less banal--we're offloading our memory and our natural immersion in natural environments to our digital devices. Study after study indeed confirms a real (and generally negative) link between cognitive functioning and use of Web technologies. And yet, we're all online, with no end in sight. What gives?
We can ask the "what gives?" question in a slightly different way, or rather we can break it into a few parts to get a handle on all this (somewhat ironically) surface discussion of the Web and us. To whit:
(a) Assuming all these articles--and the scientific studies they cite--are on to something, what makes the "Web" translate into a shallow "Human" experience? What is it about modern digital technology that generates such an impoverished cognitive-social climate for us?
As a corollary to (a), we might ask the slightly self-referential or Escher-like question of why the "Web" seems just the opposite to most of us: why does it seem to enhance our "smarts" and our abilities, from doing research based on Web searches to capturing moments with digital photography for Instagram? Why, in other words, are we in the semi-delusional state of thinking we're increasing our powers overall, when science tells us that the situation is much different? While we seem to gain access to information and "reach" with Web use, we appear to be losing "richness"--capacities that are traditionally associated with deep thinking and learning. (Capacities, in other words, that we would seem to require more today than perhaps ever.)
(b) Swallowing the hard facts from (a), what are we to do about it? At least two scenarios come to mind: (1) "Do" less technology. Go Amish, in other words. Or failing that, read an actual book from time to time. Couldn't hurt, right?
(2) Change technology or our relationship to technology itself. This is an intriguing possibility, for a number of reasons. One, as no less than the philosopher Heidegger once commented (in typical quasi-cryptic fashion), viewing any technology as merely instrumental is the paragon of naivete. We make technology, and then it goes about re-making us, as [] once remarked. The words are more true today than ever. And so, if we're stuck with technology, and it's true that the effects of technology on us are ineliminable (there is no true instrumentalism), then it follows that our salvation, as it were, must lie in some changes to technology itself. This scenario might range from tinkering to revolution; it all depends on our innovativeness, our sense of a real and felt need for change, and of course our ability to concentrate on the problem long enough to propose and implement some solutions (please, Google, don't make us stupid so quickly that we can't solve the problem of Google making us stupid...).
In what follows, then, I'm going to take a look at (a) in a bit more detail. The aim here will be to convince the reader beyond any reasonable doubt that there really is a problem, and that we're headed in the wrong direction, appearances (perhaps) to the contrary. And secondly, I'll be arguing that there's something like a creative, forward-looking, at least partial solution to (b); namely, that once we understand the cognitive-social model we're implicitly adopting when (over)using the Web, we can re-design parts of the Web itself in ways that help mitigate or even reverse the damage we're doing, and in the process (and with a little serendipity) we might also help accelerate or usher in a tech revolution. It's exciting stuff, in other words, so I hope we can all concentrate long enough to... (apologies, apologies).
On (a) - What's up with that?
Thursday, November 21, 2013
Was: Email Is: Existentialism is a Blog Post
Yeah it's interesting because a novel like The Stranger, like so much of existentialism, is actually a commentary about the loss of God (or Christianity). But everyone is so secular these days that we find it hard to see the problem (and so we sort of misread the points Camus, Sartre, Kierkegaard et al were making).
All this started with Nietzsche, who was among the first of the "great" thinkers of the 19th century to see that the consolations of Catholicism--Christendom--were swept away, gone. The entire foundation of the Western world was religion, and then, in a short span of a hundred years or so, it wasn't. It was replaced of course with science, but again, Nietzsche saw that "science" was not really a meaningful replacement for religion. When he said "God is dead" he was being prophetic--he was saying, "you people do not even understand the sea changes that are about to sweep through the Western world."
Existentialism was the philosophical response to nihilism. Kierkegaard was a Christian but thought being a "believer" was absurd and required a subjective, transformative experience that filled one with anxiety and dread (the "leap of faith", through darkness, into light, as it were). Sartre was an atheist and coined the phrase "existence precedes essence." This, again, is a profoundly religiously inspired statement: we once got our "essences" from the religious world--the notion of a soul, a benevolent Creator, and a universe that had meaning for us personally. Suddenly there are no essences, as there is no longer a "God" to give them to us. So in a void where nothing can mean anything, isn't existence (that is, without essences) terrifying and pointless? Who shall we be? And how? Sartre's answer is that we "create" our essences (his dictum means: we exist first, then we choose our essence). This all sounds warm and fuzzy today, but I think most of us don't really think through what he's saying. The freedom we gain once severed from our religious essences is, according to Sartre, a "radical" freedom. It's a gut-wrenching realization that every choice you make creates you (whereas you once had a "blueprint" to work with, so to speak). He would not understand (or certainly not agree with) our blue-sky attitudes about our existence. I think it's funny to reflect on existentialism's message in our modern techno-science world. It's sort of like: who has the time for all this fear and trembling? Huh? As Huxley warned us in his Brave New World (an almost perfect commentary on scientific dystopia), we can alienate ourselves from ourselves with distractions--iPhones, money, Facebook, on and on. The big questions don't go away so much as they never can quite come up, busy as we are (and doing, really, what?). Existentialists would say we're guilty of a false consciousness (or, in Sartre's words, "bad faith"). I get it, but you know, existential angst has its limits. :)
Anyway Camus makes this point well in The Stranger, and to my point above, note that it is the priest in prison that brings forth the main character's rage. Not accidental.
Wednesday, November 13, 2013
Descartes' Cake (having it and eating it)
I'm reading Richard Rorty's Philosophy and the Mirror of Nature. Before this I was reading something else, then something else, then...
I'm generally a fan of Rorty but here's my take on the whole POC debate and why it never seems to go anywhere. All the analysis that philosophers of mind have done in the last few decades is basically accurate. Yes, it's suspicious to talk about mental states as non-extended in the Cartesian sense, or to talk about them with nouns rather than adjectives, or even to be dualist about them. (Sure. Yep. Yeah. Got it.)
I buy the analysis Rorty gives in Mirror: that Descartes lumped reasoning-about-universals together with sensation (today: qualia) to make a distinction between extended stuff (for Newtonian mechanics, with primary qualities that are mathematically describable) and non-extended stuff, for all the personhood notions we want to protect. I accept that Descartes thus gave us the modern mind-body problem, and that this problem didn't really exist for the classical mind. For example, Aristotle would have had a hard time understanding Descartes' notion of "mind", as he thought that sensation was part of the body and he had a participatory rather than representational view of knowing. (And hence modern philosophy, with its representational framework, is obsessed with epistemology after Descartes, and this is in a real sense an historical accident due to his idiosyncratic treatment of mind-body issues, a treatment that was entirely novel and foreign to philosophers of the time.)
The problem is that the Cartesian mind-body idea (extended versus non-extended) also gives us the modern view of a material universe: just that "stuff" which is not-mind and has only those properties that are describable by mechanics (mathematics). This idea is idiosyncratic and fully a product of Descartes' error as well; you can't have it both ways. Just as mind is almost certainly not the "ghost in the machine" idea that we inherited from Descartes, so too "matter" is almost certainly not only the just-so "stuff" that we can explain and predict using our differential equations and geometry. (I like Newton too, but this is really quite a tip of the hat, to cede to him all of reality.) So the real mind-body problem is the problem of having one's cake and eating it too. This is the situation the analytic philosophers found themselves in after the scientific revolution: they accepted the Cartesian division where it suited them (as defenders of a "new" and "scientific" materialism), and rejected the mind where it didn't. I give you: our current age. (Or: our patchwork of almost certainly wrong ideas.)
So this is silly. I'm always amazed at how smart people get things wrong. I think there's some kind of smart-person bug or disease, a kind of moral courage that they sometimes lack (Was: cozy up to religion. Now: cozy up to "science"--in scare quotes because we still picture empirical science as exploring the parts of a machine, though this idea is clearly wrong today. Another puzzle.). With all those smart analytic philosopher-scientist wannabes, we're certain to get things all wrong-o.
So, modern reader, I'm with you. I'd be happy to throw out Descartes. The way I see things I can't figure out which is this French genius's sillier idea: that all of nature should correspond just-so to our differential equations (though it, of course, turned out otherwise), or that all of mind should correspond just-so to what's left over (so to speak).
In other words, there isn't really any such thing as "matter" in the Cartesian sense (stripped of everything we can't measure). Why should there be? Once you see this side of things (or this horn of the dilemma), you don't waste so much time writing diatribes about Cartesian mind (those dualists, the idiots!), because you realize it throws the same net over the materialistic notions you want to preserve. Stuff-open-to-empirical-investigation has all sorts of properties that would have flummoxed Newton (and Descartes). "Nature" (rather than Cartesian "matter") has all sorts of interesting properties. One of them seems to be some aspects of mind. Spontaneity. And quite obviously, sensation. Right? To put things another way, what sort of a universe do we really live in?
Sunday, November 10, 2013
On Scaggling and Jaggling
On the issue of language, I might say to a friend down in California, where some of my books are stored, "send me up some non-fiction books", to which my friend will ask "which ones?", to which I, not knowing specific titles, will request a list. I might say something seemingly absurd, like:
"Look, you scaggle up a list, and I'll jaggle out the ones I'm thinking about."
What does this mean? Not to go all Wittgenstein on it, but it seems like a silly language game, and it's hard to see what the shared context is, so it looks like a risky imprecision--in other words, a bad language game. Not only are there no real referents for the actions of "scaggling" and "jaggling", but only an excitable poet, or someone seemingly insensitive to a host of issues in the use of language, would express things this way.
Wrong-o. For "scaggling" is a compact and precise wording for my friend. It tells him to get a list together, but not to worry about it too much (in a philosophically imprecise but practically effective way), and this because, on the other side of things, he knows I'm only jaggling. To put things slightly differently, the intended meaning of scaggling is at least partially given by the meaning of jaggling. One is tempted to say, "if I be only jaggling, you, dear Sir, be only scaggling." In this sense, then, we've got a classic Wittgensteinian language game, or, to eschew the name-dropping, we've got a couple of verbs that are bi-relational in the sense that both intension and extension are appropriately defined, seemingly ex nihilo. All this from two verbs which, as near as I can tell, don't mean anything at all in the context of producing a list of book titles for purposes of selecting a subset of them. There aren't any necessary and sufficient conditions, and, a fortiori, it doesn't serve to explain--it seemingly makes even more mysterious and obscure--how one meaningless verb is related to another in such a way that the pair is somehow mutually explicated.
What are we to make of this? On the charge of imprecision, the rejoinder (as I've just outlined) is that however mysterious the success, nonetheless there it is. And hence from the grossest imprecision we get virtual precision--just that which I wished to say, I in fact have said, and there's no better proof than that I'll get the list, then the titles from the list, then the books, all with no one performing unnecessary work in the intended context.
So language is curious. I'm tempted to add here that, if language is this powerful, and in such a way that seems perverse to formal language analysis, then we should be hopeful that something like the analytic tradition in philosophy can be turned on its head, and made to succeed by not getting rid of a bunch of artificial problems in language, but rather by getting rid of itself, using its own methods (so to speak).
Now I'll turn to another issue, which is the issue of scientific statements. If I start scaggling and jaggling about, say, a chaotic system, I'll get myself into trouble. A chaotic system is just that system which has properties like dense periodic orbits, something like topological transitivity (here I forget the details), and sensitive dependence on initial conditions. Every word means exactly what it has to mean in order that a set of mathematical statements can be produced to describe it. A nonlinear partial differential equation like the Navier-Stokes equation will need to be summoned up out of a bag of techniques for describing dynamical systems, for instance, in order to get somewhere with describing chaos. You can point to a turbulent system, sure, but to describe and partially predict a chaotic system you need to get reference right, which means you need "dense" not to mean "stupid" but rather to describe how periodic orbits fill out a phase space.
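To make "sensitive dependence on initial conditions" a little less abstract, here's a minimal sketch using the logistic map--a standard textbook example of chaos, and my own illustrative choice rather than anything from the fluid-dynamics case. Two trajectories that start one part in a million apart soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map x -> r*x*(1-x) at r = 4.0 (a standard chaotic regime); the starting
# values are arbitrary illustrative choices.

def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # two starting points that differ
b = logistic_trajectory(0.200001)   # by one part in a million

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  "
          f"(gap {abs(a[step] - b[step]):.6f})")
```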
Hence one is tempted to say, with respect to language about physical systems, that there is no corresponding statement to the effect that "If you be a scagglin', then I be a jagglin'." One can't, for instance, simply say "If we be scagglin' a Navier-Stokes equation to a problem in fluid dynamics, then we be a jagglin' some chaos"--or rather, one could do this, but unlike in the book scenario no additional theoretical or practical work is performed by my linguistic act. (Potentially, I'm not taken seriously by my colleagues, either. One could imagine getting escorted out of a building, too.)
I'll make one final point here, which is that the notions of "precision" and "non-vagueness" are themselves seemingly imprecise and vague, or at least contextual in the Wittgensteinian sense. (I'm tempted to add here, too, that this is a very big deal.) In my first example, with apparently vague locutions ("scaggling", "jaggling") we get exactly the intended result, and this with a conservation of language too (how simple and elegant that two verbs should be bi-definitional, while neither really has a definition in the context (which would, alas, simply be more words), and that each is adequately defined by the other by simple assertion). In contrast, from the most specific language we can formulate (namely, that of modern mathematics), the vaguest and most impossibly non-predictive results seem to flow, as with the description of a chaotic system, where most of the "meaning" of the system is given precisely by its inability to be rendered comprehensible or predictable or precise. It should be obvious, then, that there's no necessary connection between precise language and precise results; or, that the goal of making our language "more precise" by making it more mathematical or specific does not entail much about its referents (if by "entail" we mean that the precision of the expression somehow transfers to the referent, "cleaning it up"--a simple and very silly notion).
What I'm saying is that, to nature, the chaotic system may simply be scaggling and jaggling along.