The esteemed editor of this magazine was not at all persuaded by my discussion of Twitter in the first installment of this new column (“Weiners and Losers,” September). I would have been more than a bit disappointed if it had been otherwise. Though I have been using Twitter in various ways for over four years now, I remain deeply ambivalent about the service. As I noted, I had signed up for Twitter over a year before I began to post to it with any regularity, and it was some months after that before I quit using it almost exclusively as a marketing tool. One of the more distasteful elements of Twitter is the extent to which it is cluttered up by the equivalent of e-mail spambots. Some are entirely automated and represent no real human being; these, at least, are generally easy to spot and to block. Worse yet, in the opinion of many Twitter users, are those people who follow everyone in sight in the hope that other users will follow them back, for no purpose other than to promote a particular website or product. It’s possible for a person’s tweets to be interesting even if he does not interact with other Twitter users; a stream of tweets that is nothing more than p.r. or advertising, however, is rarely worth following (unless it concerns a product on which you rely). To the extent that most people find Twitter enjoyable or even simply useful, it’s because the users they follow are recognizably human—a condition of which p.r. flacks and marketers can rarely be accused.
Of course, “recognizably human” does not necessarily imply “good” or even “tolerable.” Jerks are as recognizably human as saints, even if they are not fully so. And, try as you might, you are never really going to get to know someone simply through Twitter. That is true, in a sense, of all writing, including the articles in this magazine; but the 140-character limit on tweets makes the proposition simply ridiculous.
Which, oddly enough, explains why my wife and I were willing to attend a “tweet-up” at the local brewpub in order to meet the Rockfordians we had encountered on Twitter. Had their tweets been merely promotional or otherwise uninteresting, we would not have bothered. But some showed a certain promise, and even those that were, well, odd were odd in an interesting way. In order to learn more about these people, we had to meet face to face. And good beer and some of the best pizza in Rockford ensured that, if all else went horribly wrong, the evening would not be an entire waste.
Dr. Fleming finds the very idea of such a meeting extremely creepy and chalks the difference in our reactions up to a generational gap. He’s undoubtedly right about the latter. But is meeting, a couple of miles from your home, a group of Twitter users with whom you have been interacting for months any more odd than attending your first John Randolph Club meeting or your first Rockford Institute Summer School, much less gathering with a group of Chronicles readers you do not know (and most of whom do not know one another) in a bar in St. Louis? Perhaps—after all, Chronicles readers have at least one thing in common, and it’s a good one. But then, so do local Twitter users—and I don’t mean Twitter but Rockford.
That, it seems to me, is the main attraction and the greatest strength of social networking, in all of its many different flavors. We speak of Twitter and Facebook and now Google+ (and, in days gone by, of MySpace) as if each is a monolith—a natural reaction, considering that they are run by corporations whose technologies are, at core, under central control. But whereas Twitter is selling Twitter, what most people are buying is not a centralized experience but an individualized one—or, rather, multiple individualized experiences. The people I follow on Twitter, and the people who follow me, are divided into many different social circles, some of which overlap, many of which do not—Macintosh and iOS developers, Catholic apologists, Chronicles readers, Chronicles writers, fellow employees of The Rockford Institute and of About.com, “foodies,” drinkers of craft beer, Rockfordians. The list goes on and on.
Oddly enough, the companies selling social networking have taken a rather long time to understand this. And it shows in the kludges they have put together to try to make it easier for people to use their social networks in the different areas of their lives. Twitter now allows users to create multiple lists of the people they follow, but switching between the lists, both on Twitter.com and in most native computer and mobile applications, is awkward at best. Facebook has added both lists and groups, to which you can apply separate privacy settings that should, in theory, keep your boss from having to read about the latest movie you got from Netflix or what you ate for supper last night. In practice, unless you lock down your privacy settings so that no one can see anything you post (thus defeating the purpose of a social network), there’s a good chance that one part of your life will drift into another—perhaps with unwanted consequences. And even if you could somehow figure out how to make these bolted-on features work, you would still have to go back and divide your contacts into discrete lists—a daunting task, if you have more than a handful of friends or followers.
Google+, the new kid on the block (Google announced the new network after I wrote last month’s column), may illustrate something that the company’s “frenemy,” Apple, has long understood: The winner is not always the first to get to market, but the first to get it right. Though the service is currently invitation-only, people have been flocking to it, and not simply because of the novelty. Indeed, some people who dislike Google’s unstated policy that “All your data are belong to us” have found the company’s latest foray into social networking strangely compelling.
That’s because the assumption that users belong to many different circles lies right at the heart of Google+. At the moment you add a new contact, you are encouraged to place him in a circle (Google+’s telling name for its version of lists) immediately. Whenever you post something, you choose which circles to share it with; none of your other contacts can see it. Google+, of course, encourages you to share it as widely as possible, and the user of the service should always remember that anything you share with even the most restricted of your circles is shared with Google itself.
Now Google didn’t design Google+ simply to address the shortcomings of Twitter or Facebook (the network it most closely resembles). The company is in the business of targeted advertising, and as you share items, make connections, and categorize those connections, Google becomes better able to serve up advertisements, both in Google+ and elsewhere, on which you’re more likely to click.
Does that represent, on a fundamental level, a loss of privacy? Absolutely. Do most people who use Google+, or simply Google, or even the Internet at large actually care? Probably not. While Google has not discussed how many users have signed up for Google+, tech websites place the number in the tens of millions, and Paul Allen, the founder of Ancestry.com, projects that it will reach 100 million users within five months.
That astounding rate of growth, even in the face of privacy concerns, reflects a desire among users to have their virtual realities mirror their real lives. In “meatspace,” I don’t share everything with my colleagues that I share with my friends from high school, much less with my wife; why should my online interactions be any different?
Making our social networks reflect our real lives helps take the pain and anxiety out of participating in them. And that brings them one step closer to becoming nearly indispensable technologies, like the postal service, the telephone, and e-mail, rather than the massive time sinks they too often seem to be today.