Meta-meta-commentary on Google/Nest

What I see as the key piece of the increasingly shrill public response to the Google acquisition of Nest:

"People invited Nest into their houses. Not Google."

As I suspected, there’s something about the status of the Nest thermostat as a piece of distinctively domestic technology that’s freaking people out. The rhetoric of the complaint is instructive: Nest was “invited” into the home.

This obscures the point that Nest is, and has always been, a piece of technology pretty much like any other. And I still can’t help but feel that most of this outrage is contrived — or at least blown totally out of proportion, because the Nest is a gizmo that people almost universally liked and Google is an easy target.

But, language aside, the privacy complaints still feel mostly meaningless. If you use a smartphone, or Gmail, or Google Search, or have cookies enabled on your computer, you’re already granting Google access to so much information about your daily habits that the temperature of your house seems like an afterthought. Other than people who object to the entire Google personal-information-harvesting ecosystem (which is a reasonable position, albeit not one I share), I’m not convinced that there’s a whole lot of significant information about the average Nest user’s home life that Google would gain from access to Nest’s data.

What’s more interesting, as John Gruber (and others) points out, is: people are reacting so strongly to a mostly innocuous acquisition because they’re afraid of Google to the point of irrationality. And that ceases to be a technical or privacy-related issue; it’s a public relations one, which is much harder to solve.

Posted 1/18/14 by Yoel Roth.

Nest, privacy, and the Google reflex

Something about Google’s acquisition of Nest yesterday triggered the internet’s collective privacy gag reflex.

Why are we so upset? What’s the problem with Google making a networked thermostat? My hunch is that we’re especially sensitive to privacy issues around smart thermostats because they’re in our living rooms, constantly gathering data about the supposedly sacrosanct private space of the home. But let’s map out the debate as it’s unfolded so far:

Most of the negative reactions to the $3.2 billion acquisition that I’ve seen — and it’s worth mentioning that I haven’t encountered a single favorable response from anyone, anywhere — fall into two categories:

  1. User experience: Google buys cool stuff and tends to ruin it.
  2. Privacy: There’s something intrinsically bad for privacy about Google getting thermostat data.

The user experience argument is probably the most compelling. No one, myself included, wants Google+ social cruft layered on top of a stellar product like Nest. I don’t see this happening, mostly because I don’t think product managers at Google are preternaturally stupid. No one can make a compelling case for a social thermostat. The more likely scenario is that Nest will be integrated via API into Google Now in a way that will benefit Android users and be basically irrelevant for everyone else.

The privacy issue is a little more complicated, and the waters are muddied even further by our seeming “Google reflex”:

— Google did a new thing! Privacy is dooooooooomed!

— Why?

— Because, um, Google.

— But, why?

— Cookies, probably.

I’m not arguing that Google is always right, or that privacy is inherently worthless. But there’s nothing to be gained, analytically or socially, by assuming that anything Google does necessarily makes the world a worse place to live in.

The real question is: What’s Google’s endgame in buying Nest?

In crying foul over privacy, most people are assuming that Google wants to own Nest because there’s some useful data a thermostat could contribute to the AdSense empire. But, in reality, Nest doesn’t know very much about you, the consumer, that an interested advertiser couldn’t figure out in a different way. For example, Nest learns when you’re at home in order to more efficiently heat and cool your house; cue alarm bells for “Google knows when you’re sleeping; Google knows when you’re awake.” But Google could already figure that out from my Gmail usage patterns, or when I’m idle on Google Talk, or when I tend to search for things. The only novel data Google gets from Nest is about heating and cooling. An even more farfetched use of this data might involve extrapolating things about the construction of a home from its heating and cooling curves. But in either case, the absolute worst-case scenario is that Google can target ads to you for a more efficient air conditioner or a contractor to install more insulation. This doesn’t seem like an especially big deal to me.

(And, of course, this all assumes that Nest’s privacy policy changes to even allow Google access to its data — something that hasn’t happened yet.)

The more plausible reason for the Nest acquisition is Google’s long-standing interest in energy — from cooling technologies to hydroelectric power. Google has lots of reasons to be interested in a startup that makes HVAC more efficient; they’re called datacenters. Again, this seems pretty benign.

The longer-run question, and the one that got lost in the knee-jerk privacy hysteria around the Nest acquisition, is: Do we want Google to become an infrastructure company? It’s been moving in that direction with services like Google Fiber, and Nest lays the groundwork for a bigger expansion into the energy industry.

For my part, I’m not especially worried. There’s no reason to believe that a hypothetical Google Energy utility would be any more or less evil than the existing players in the field. If anything, we might see some concrete progress towards a smarter electric grid. Of course, there’s room for debate about the merits of a smart grid, too; but the essential merits of those debates are undermined by misdirecting our anxieties onto the nebulous categories of “Google” and “privacy.”

Posted 1/14/14 by Yoel Roth.

Remembering the conservatism of Steve Jobs

A year ago today, Steve Jobs died.

I remember the sudden outpouring of grief. I remember the Post-It note tributes on the glass of the Walnut Street Apple Store in Philadelphia — snowballing from one to ten to a multitude. I remember seeing Walter Isaacson’s book in the hands of, seemingly, everyone. I remember the consternation of a million tech bloggers simultaneously lamenting the demise of Apple as we knew it. I remember talking to my therapist about whether, as a former Apple employee and a passionate Apple consumer, it was strange that I wasn’t crying, or really feeling much of anything at all (though I’ll admit to tearing up when I read Brian Lam’s "Steve Jobs was a kind man: My regrets about burning him" in The Atlantic).

In the following year, I bought an iPhone 4S, then a new iPad, then a MacBook Pro with Retina Display, then an iPhone 5. I spent just shy of $5000 on Apple products, even as, with each release, I read the collective yawn of technology journalists fed up with what they perceived as incremental improvements. I saw AAPL cross the 700 mark for the first time.

Today’s tribute video on the Apple homepage highlighted products: the iMac (“The whole thing is translucent!”), the iPod (“…and it goes right in my pocket”), and the iPhone (“Are you getting it?”). But, in Tim Cook’s letter, the message was a little different. Apple’s DNA, as Steve liked to describe it, is a question of corporate culture: “No company has ever inspired such creativity or set such high standards for itself.”

We’ve been hearing the technology + liberal arts line a lot from Apple lately. But, actually, I think that standards are the issue at the core of the company. It’s why the MobileMe and Maps fiascos are so deeply embarrassing. It’s why I’m infuriated by Phil Schiller’s asinine response to “Scuffgate”: that scratches on the bezels of black iPhone 5s out of the box are “normal.” It’s why, in my four years as a Mac Genius, I routinely ignored AppleCare’s guidelines about stuck and dead pixels in LCD panels. A millimeter scuff mark or one bad pixel out of 1,296,000 in a display is mathematically insignificant; but, on a human scale, it’s annoying as hell. It’s a violation of the standards Apple’s customers have come to expect — and, more importantly, the standards Apple enforces for itself. Those standards are what Steve Jobs embedded in Apple’s culture: from his over-engineering of the Macintosh assembly line to the savage design process Walter Isaacson chronicles so well in his biography.

Speaking of Isaacson: I resisted reading his book for about six months after Steve’s death. But, when I finally picked it up, I found it to be a remarkably artful weaving of the story of Steve Jobs (as an asshole; as a visionary; as a father and husband; as a perfectionist) with the story of Apple writ large. Apple’s products were a vehicle for Isaacson to tell the story of Steve, in a slightly megalomaniacal way that, I suspect, Jobs would approve of.

Absent from Isaacson’s thousand-page tome was a treatment of one of the earliest statements of Apple’s DNA: the Human Interface Guidelines first written by Bruce Tognazzini (employee #66) in 1978. The HIG are, at their core, an articulation of what makes Apple products feel Apple-like. Some of this has to do with the mechanical things that emerge from hours of user experience studies: things like the size of touch targets that are easy for people to interact with (44x44 pixels). But a bigger part is philosophical. The HIG for Mac and iOS today read like the source texts that Apple product keynotes are cribbed from: “People use computers to create and experience the content they care about.” “The display encourages people to forget about the device and to focus on their content or task.” These are the same lines we’ve been hearing in the iPad, Mac, and iPhone announcements for the past few years, with slightly different language. And this is developer documentation, not ad copy.

The Human Interface Guidelines represent the belief that Apple has figured out how to make, as Tim Cook put it in his letter today, “products that our customers love.” It’s why Apple goes to such lengths to tell developers how to make better applications. It’s why Apple puts out a 48-page document detailing the minimum amount of blank space that should be used around its logo in print. (It should be “equal to the height of the Apple logo, measured from dimple to dimple.”)

This isn’t just about Steve Jobs’s neurotic attention to detail; it’s about believing that Apple has found the answer to the product design problem. And this is a deeply conservative position. I don’t mean “conservative” in the gay-hating, small-government, Michele Bachmann sense — but in the older, Aristotelian meaning of the word. There is an objectively, universally right way to do things. And Apple’s products are about the pursuit and attainment of this kind of perfection, as applied to computing.

We’ve been misled, in a way, by years of rehashing the “1984” and “Think Different” ads. Certainly, there’s something fun and revolutionary about Apple’s self-presentation. But nothing — nothing — about the Macintosh represented a round peg in a square hole. Using a Macintosh is supposed to feel like putting on a well-worn pair of jeans: effortless, comfortable, familiar. “New” isn’t a part of the equation, except insofar as new products sometimes represent iteratively better solutions to the problems of design.

This might be why we’ve been so bored by Apple’s product releases lately. There’s nothing dramatically different about the iPhone 5, as compared to the iPhone 4S — it’s just better. And that better-ness isn’t the accidental result of throwing shit at a wall and seeing what sticks: it’s the product of careful engineering refinements. Is this boring for tech journalists? Sure. But the result is a lineup of products that are each, without question, best-in-class. That’s a concept that’s hard to sell to consumers in the abstract, but anyone who holds an iPhone 5 in their hands gets it intuitively. This phone is the best, full stop.

More than anything else, the relentless pursuit of human-scale perfection through engineering is the legacy of Steve Jobs at Apple. It’s something that, as best I can see, Tim Cook has managed to maintain. The iPhone 5 is exactly what I wanted it to be, and I have no doubt that, if Steve Jobs had been on stage on September 12, I’d still be holding the same device in my hands. We’ve entered an age of boring devices. But nowhere is the new normal of consumer electronics embodied more perfectly than at Apple. That’s the life work of Steve Jobs. He’ll be missed, but the paradigm of design he helped create ensures he won’t be forgotten.

Posted 10/05/12 by Yoel Roth.

nature1188, leonlovesddr, and other legacies of my digital past

A few weeks ago, I made it a project one evening to log back into my long-dormant ICQ account. Not because I feel like ICQ has anything in particular to offer my instant messaging experience — I’ve stopped using even AOL Instant Messenger, the service ICQ was folded into — but as an exercise in what parts, if any, of my early digital identity I could actually regain access to in 2012.

ICQ turned out to be the easiest service of the night. A quick search for my first and last name in ICQ’s still-operational people directory turned up four results. Because ICQ UINs were issued sequentially, the lowest number was my earliest account: 1807000 — apparently a hugely desirable number. I’d forgotten, of course, that low 6- and 7-digit ICQ numbers used to be a lucrative business in the heyday of pointless shit people paid for on the internet. (In retrospect, I’m surprised that my enterprising elementary-school self didn’t try to sell my number back when I could have found someone stupid enough to pay for it.)

I’d also forgotten my password. The ICQ password reset process required me to log back into one of my early e-mail accounts, on Hotmail, which presented yet another hurdle because, of course, I’d also forgotten that password. Fortunately, my personality hasn’t changed very much since middle school, because my intuitive answers to my challenge-response security questions still worked. (Q: “What is your favorite TV show that is no longer on the air?” A: Daria.) Along with 10,000 or so spam messages and a small handful of personal messages I hadn’t deleted (this was back in the days when e-mail storage limits were still relevant), I found the ICQ password reset e-mail and, a few clicks later, had access to an instant messaging service no one I know still uses.

I wasn’t so lucky with my absolute earliest e-mail account. My first AOL screen name, nature1188 (a riff on an elementary school friend’s nature385; evidently, we were both very into nature), still existed, but the password reset process required access to another long-gone address. And there, the trail ended. Adelphia, South Florida’s first cable broadband provider, went bankrupt in 2002 and was eventually acquired by Comcast. And, according to Comcast, I missed the migration deadline for my Adelphia address sometime in 2003.

There’s no telling what I might have unearthed if I had logged into my old AOL account. Most likely, as with Hotmail, it would have been a lot of spam, and possibly one or two hints of what I was up to on the internet in elementary school. And this doesn’t even scratch the surface of my various digital presences: old blogs, long forgotten or deleted (including my first LiveJournal account, leonlovesddr, later abbreviated to leon); forums I used to post on; a now-defunct news site (which apparently closed up shop in 2011) where I used to lie about my age and post under the username Mok; and on and on.

Plenty of people have been preoccupied with these sorts of questions before. GOOD mused about the “eternal shame” of your first online handle, never pausing to wonder if those handles are still accessible or what actually came of them. The New York Times and the Atlantic have both written about the “problem” of on- and offline death in the digital age, and how to deal with the trail of Facebook profiles and e-mail accounts that persist once we die. One of the biggest issues, both articles agree, is that we’re generating so much digital content that it’s hard to imagine future generations being able to sort through it. As the Times puts it,

Bit-based personal effects are different. Survivors may not be aware of the deceased’s full digital hoard, or they may not have the passwords to access the caches they do know about. They may be uncertain to the point of inaction about how to approach the problem at all. Any given e-mail account, for instance, can include communication as trivial as an “I’m running late” phone call or as thoughtful as a written letter — all jumbled together, by the hundreds or thousands.

The living aren’t exempt from these issues, either. Viktor Mayer-Schönberger, in Delete (2009), writes about the humanistic problems of an information environment in which we’ve forgotten how to forget. Reputation and identity have become indelible, in a way that Mayer-Schönberger suggests they weren’t previously. The archival of blog posts and tweets and Facebook status updates creates a digital trail of personal information that short-circuits the human processes of forgetting and evolving. Maybe that’s true, although Mayer-Schönberger’s proposed solution — most significantly, an artificial system of information expiration after a set period of time — is hardly an adequate response.

But what about all the information we don’t remember or can’t regain access to? I’m part of one of the first waves of so-called “digital natives,” and most of my life is archived, in some form, online — essentially from birth, in a series of e-mail accounts and social network handles. So is it a problem that I can’t access the first 15 or so years? I don’t flatter myself to think that I will ever have a biographer interested in piecing together the life of Yoel Roth from my digital detritus, but as a point of personal interest, I’m slightly concerned that a significant chunk of my past is totally out of reach. Far from Mayer-Schönberger’s claim that, socially, we can’t forget, I’m confronted with the experience of being utterly unable to remember.

Posted 6/06/12 by Yoel Roth.

Trusted traveler, redux

Since my last post about the TSA’s Pre-Check program, it seems like most of the internet has discovered that elite frequent flyers are being whisked through airport security. The LA Times ran a piece noting that the TSA screened its millionth Pre-Check passenger in the program’s first eight months of operation (against the 1.8 million passengers screened daily at American airports). Even the New York Times chimed in, outing the not-so-secret way to short-circuit the Pre-Check roll-out process — namely, getting a Global Entry membership (like I did).

And, meanwhile, I did a fair amount of flying (to Boston, San Francisco, and, most recently, Fort Lauderdale), both with and without Pre-Check. I’d like to share two travel-related stories, and then point out a possible epistemological contradiction in my wholehearted embrace of Pre-Check as the savior of the modern travel experience.

Case 1: San Francisco to New York, aka “travel for the rest of us”

Flying business class carries some perks — namely, access to the expedited security line, which is faster mostly because it contains people who actually know how to travel like human beings and are less likely than average to have squalling infants and collapsible strollers that require secondary and tertiary screenings. Nonetheless, the second I reached the backscatter x-ray machine in SFO’s Terminal 2, my “priority” experience ground to a complete halt.

I want to pause here and note that I’m by no means a tin-foil-hat Luddite conspiracy theorist. Nevertheless, there’s something I find inherently unsettling about the pornoscanners. Part of my concern is, obviously, for the privacy of my genitals. But, more to the point, I don’t think it’s reasonable to install machines in airports with largely unknown health effects and marginal security benefits and parade millions of travelers through them. So, since the scanners first started showing up, I’ve opted out, which entails standing around trying to look non-threatening while my laptop sits unattended on the other side of the metal detector, waiting for a male TSA employee to become available to feel me up.

Anyway, in SFO, the TSA employee working the priority line seemed more confused than anything else by my request to opt-out. As I stood to her right, watching her wave passengers into the x-ray machine, she looked at me and said, with a puzzled expression (as if she couldn’t believe that anyone would be so foolish as to object to the scanners), “There’s nothing wrong with the machine. No radiation, no nothing!”

Not quite.

A few minutes later, a male employee gave me the requisite pat-down (including touching what a TSA employee in Philadelphia later called my “groinal meat”) and I was on my way. The TSA confirmed, by leaving no part of my body un-patted-down, the same things they already learned from my Pre-Check profile: namely, that I’m not a terrorist. By contrast:

Case 2: New York to San Francisco, now with added stereotyping

The problem with having Pre-Check is that there are no outward signs of it (other than the demographic properties that, I suspect, make you statistically more likely to be accepted into the program). So, upon presenting my boarding pass to an airport employee at JFK guarding the entrance to the Pre-Check and priority security lines, I was given a silent one-finger point to the security line for the hoi polloi a few yards over. “No, I have Pre-Check,” I insisted. The woman gave me a once-over and asked, “What, you? Pre-Check?” Yes, me, Pre-Check. And, after scanning my boarding pass, she confirmed that I wasn’t merely economy class cattle trying to bust into the walled garden of premium security and waved me through.

No more than 30 seconds later, I was on the other side of the metal detector, still wearing my shoes, picking up my fully-packed carry-on and walking away from the checkpoint.

It’s easy to argue that Pre-Check is the better of these two options; and, from my perspective as a traveler looking to get through security without being groped or having to totally disassemble my suitcase, it is. But there’s a slightly uncomfortable tension between my attitude towards Pre-Check and my intellectual predilections more generally.

When discussing airport security with my mother (who, for the record, is a huge proponent of the Israeli method: namely, racial profiling), I found myself arguing in favor of Pre-Check on the grounds that algorithms mining risk from vast quantities of personal information are better at identifying potential terrorists than even the best-trained humans. Things like my frequent flyer or employment history are reasonable indicators of how much of a security risk I might be, and those aren’t readily apparent to someone working a lengthy security line at an airport.

But isn’t this exactly the kind of logic that makes qualitatively minded scholars, myself included, scream bloody murder? As Manovich (2011) (PDF) and boyd and Crawford (2011) (PDF) have pointed out (among many others), there are some glaring problems with the assumption that big data is capable of algorithmically deriving analytically useful or accurate representations of human phenomena. Less radically, it’s at the very least fair to say that big data methods rely on different epistemological approaches by virtue of deploying a whole new class of data. As Manovich puts it,

Ethnographers and computer scientists have access to different kinds of data. Therefore they are likely to ask different questions, notice different patterns, and arrive at different insights. This does not mean that the new computer-captured “deep surface” of data is less “deep” than the data obtained through long-term personal contact. In terms of the sheer number of “data points” it is likely to be much deeper. However, many of these data points are quite different than the data points available to ethnographers.

And as governments have gotten into the big data game, the waters have become yet murkier. For one, as The Economist puts it, the task of wading through the relevant data sets to assess security risks has become nearly impossible. And the shining beacon of hope for air travel, Pre-Check, relies on a mix of big data (like the pages and pages of personal details I submitted with my Global Entry application) and salient bits of small data, like someone’s elite frequent flyer status or having an American Express Platinum Card. This mix is puzzling for a number of reasons; most significantly, that terrorists aren’t stupid, and that if a screening measure is calibrated around any predictable set of manipulable properties (like frequent flyer status), it becomes relatively easy to circumvent.

In the case of airport security, it’s hard to know what approach is the best. Israeli airports represent, in a way, the most humanistic approach, relying on heuristics and intuition to discern meaningful trends in passenger behavior. (Andrew Sullivan has culled most of the interesting perspectives on this debate here.) Pre-Check nods in the direction of big data, but remains too selectively deployed to really gauge how effective its algorithms are at profiling large swaths of the American population.

Or you could always just take your shoes off, put your liquids in a bin by themselves, and embrace invasive security measures at their finest. In short: brute force, TSA-style.

Posted 5/27/12 by Yoel Roth.