Google Glass, a compact computer fitted onto a pair of slim metal eyeglass frames, is an impressive technical achievement. But can it be a business?
Glass is the pet project of Google’s cofounder Sergey Brin. The compact frames have a boom on one side that hides a camera, a battery, motion sensors, a wireless connection to the Internet, and other electronics. That boom also contains a small display, the light from which is directed into the wearer’s eye by a thumb-size prism positioned just under the right eyebrow.
Google has shown off video and crisp photos captured by trapeze artists, skydivers, and supermodels wearing Glass prototypes like those it first unveiled in April 2012. Recently the company posted a show reel in which people used voice commands to have Glass take pictures and send messages.
But just how this R&D project might become a popular product and a significant contributor to Google’s bottom line remains fuzzy. Clearly, anyone who can reinvent the mobile computing experience has everything to gain; Apple proved that with the iPhone and the iPad.
Yet for Google to turn Glass into a similar commercial coup, the company will have to negotiate challenges in fashion, design, and human relationships that lie outside its previous experience. Google, which says it plans to start selling Glass this year, declined to comment for this article.
Making Glass affordable to consumers will be the easiest part. The device may look unique, but it will mostly be a remix of compact electronic components now standard in smartphones, and it should cost about as much as a smartphone to make.
“We put the average prices of smart glasses, not just Google’s, at $400,” says Theo Ahadome, an analyst with IHS Insight, which strips down phones, tablets, and other devices to estimate their costs.
Persuading large numbers of people to put the device on their faces will be a far bigger challenge. Blake Kuwahara, an eyewear designer who has created glasses for Carolina Herrera and other fashion houses, says Google will have to reinvent its product if it is to succeed as fashion and not just as a computer for your face.
To judge from Google’s prototypes, “it’s clear that this device was designed by industrial designers,” says Kuwahara. “To make this something that someone will want to wear full time, there need to be adjustments to the aesthetics and styling—it reads as a device and not a pair of fashion eyewear.”
It also remains unclear what Glass’s killer app will be. Google has floated some ideas: people could use the technology to get directions while traveling, or to share video of experiences such as roller-coaster rides with friends in real time. Those scenarios make for great TV coverage of Google’s prototype, but their value to most people is uncertain, since almost everything you can do with Glass you can already do with a smartphone, and probably more easily.
Perhaps recognizing the dilemma, Brin has openly sought help generating more ideas for how to use the product, and he’s also taken digs at the competition. During the TED conference in late February, he called smartphones “emasculating” because their users are “hunched up, looking down, rubbing a featureless piece of glass.” By contrast, Glass would “free your eyes,” he said (see “Sergey’s Android-gynous Moment”).
Last June, Brin appealed to software engineers attending Google’s annual conference for outside developers, inviting them to pay $1,500 for prototypes to experiment with (these early “Explorer” models have yet to ship). After signing nondisclosure agreements, some developers attended closed-door meetings last month in San Francisco and New York to get their first experience with the technology.
Hardly any software programmers have experience developing for something like Google Glass, and doing it well will require throwing out some fundamental conventions of today’s computers, says Mark Rolston, chief creative officer at Frog Design, a design firm that has worked with many consumer technology companies.
Today, people treat mobile computers like tool boxes, says Rolston, digging out individual tools—applications—to achieve particular tasks. “If you’re wearing a computer, that application model needs to go away,” he says. “Instead, it needs to be cued by the outside world so it feels like natural life, not interacting with a computer.”
Google’s limited demonstrations of Glass suggest that the company agrees. The glasses do have a touch pad on the side for scrolling through menus, but in Google’s demonstrations, users are shown calling out “Okay, Glass” and then saying a command such as “Take a picture.” Google’s Android mobile operating system for smartphones has also been shifting away from an app-centric approach. Google Now, a core feature of the latest version of Android, offers live arrival and departure times when a person goes near a transit stop (see “Google’s Answer to Siri Thinks Ahead”), an approach well suited to Glass.
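To make the shift Rolston describes concrete, here is a minimal, purely hypothetical sketch in Python, not based on any Google, Glass, or Android API; every name in it (ContextSignal, show_card, near a transit stop) is invented for illustration. It contrasts the app-centric model, where nothing happens until the user digs out an app, with a context-cued model, where a signal from the outside world pushes information unprompted, the approach the article attributes to Google Now.

    # Hypothetical illustration only: no real Glass or Android APIs are used here.

    from dataclasses import dataclass


    @dataclass
    class ContextSignal:
        kind: str    # e.g. "location"
        detail: str  # e.g. "near transit stop"


    def show_card(text: str) -> None:
        """Stand-in for pushing a small card into the wearer's field of view."""
        print(f"[card] {text}")


    def app_centric(user_opened_transit_app: bool) -> None:
        # The user must remember to open a transit app and ask for the information.
        if user_opened_transit_app:
            show_card("Next bus: 4 minutes")


    def context_cued(signal: ContextSignal) -> None:
        # The outside world cues the information; the user never "opens" anything.
        if signal.kind == "location" and "transit stop" in signal.detail:
            show_card("Next bus: 4 minutes")


    if __name__ == "__main__":
        app_centric(user_opened_transit_app=False)                        # nothing happens
        context_cued(ContextSignal("location", "near transit stop"))      # card appears

The point of the contrast is the trigger: in the second function the device reacts to where the wearer is, so the experience "feels like natural life," rather than waiting for the wearer to reach for a tool.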
Those same techniques may also be suited to mixing in targeted ads, although the leader of the Glass project, Babak Parviz, said in January that he had no plans for ads to appear on the device.
The least predictable part of Google’s task is to make Glass as acceptable to people who aren’t wearing it as it is to those who are. Looks aside, the way people wearing Glass behave will be crucial, says Rolston. For example, talking with or even paying attention to other people while information streams directly into your field of view could be challenging.
“We’ll have to learn the social boundaries [of] ignoring someone when it looks like you’re engaged,” says Rolston. “Normal cues like taking out your phone will go away.”