Two and a half years after Sergey Brin unveiled Google Glass with a group of skydivers jumping from a zeppelin above San Francisco, the computer you wear on your face is falling to its death. It’s still not a finished consumer product. It’s not even close to being something people yearn for, at least not beyond the Glass Explorers who each paid $1,500 for early access.
Although Google says it’s still committed to Glass, several companies, including Twitter, have stopped working on apps for it. Babak Parviz, the creator of Glass, left Google in July for a job as a vice president at Amazon, where he’s looking into new areas of technology. Even some of the early adopters are getting weary of the device. “I found that it was not very useful for very much, and it tended to disturb people around me that I have this thing,” says James Katz, the director of emerging media studies at Boston University’s College of Communication.
A lot of this is Google’s fault. Rather than spending years developing Glass in secret, Google trotted it out as an early “beta” product that was somewhat functional but finicky and literally in your face. It hoped that software developers would come up with killer applications and that the people wearing it would act as evangelists. Presumably, the strategy has led to some priceless insights for the next version—Google’s online Glass forum brims with questions and feature requests from early users. But as Katz noted, it caused a social backlash. The “explorers” have become widely known as “glassholes.” Why? The reasons are telling, and they help us understand where the technology could go next.
Glass didn’t fail only because it looks weird. The other big problem was the one Katz mentioned first: its lack of utility. Glass annoyed other people largely because no one could understand why you’d want that thing on your face, in the way of normal social interaction. It does a handful of things—it can take videos, give you turn-by-turn directions, make phone calls, or search the Web—but it doesn’t do any of them all that well. It might have succeeded while looking weird if it let you do amazing things (the forthcoming Oculus Rift virtual-reality headset looks goofy, but people will eagerly put it on). Or it might have found more fans even if it didn’t do all that much, as long as it looked unobtrusive.
However, I can see how smart glasses will improve on both counts. The idea that Glass represents—letting you take in digital information at a glance—remains powerful. Even though I gave up on wearing Google Glass pretty quickly, I did find it helpful in situations where I wanted to be online yet didn’t want to be interrupted—while cooking or cycling, for instance. I could easily look at the list of ingredients in a recipe by tilting my head upward, or shift my eyes to check my speed on a descent. A display in your line of sight can make for a better navigational tool or real-time language-translation assistant than a smartphone.
And far more intriguing possibilities remain. A device that could sense what you were doing at a given moment and serve up relevant information in your field of view could be incredibly useful as a memory aid and productivity enhancer. Applications like those are constantly cited by wearable-computing die-hards like Thad Starner, a Georgia Tech professor and Glass technical lead who has been making and wearing such gadgets since 1993. (See my Q&A with Starner in July/August 2013.)
Researchers inspired by these prospects—and companies that make wearable devices for niche applications—are going to keep plugging away in hopes of getting to a point where the technology blends into the glasses themselves, rather than sitting so obviously atop them. So imagine that in a few years someone comes out with smart glasses that are pretty much unnoticeable. They have a tiny display in the lenses; the electronics and battery are neatly concealed in the frame. They’re operated easily with a few fairly inconspicuous touch gestures, eye movements, and, when appropriate, voice commands.
This version of the technology wouldn’t automatically irk people around you. And surely that would inspire software developers to have another try at creating applications that finally deliver the information-rich lifestyle Starner calls a “killer existence.”
Blending in
There are several ways the technology can be streamlined significantly.
There’s no ignoring the prism-like display on the current version of Google Glass. It juts out from the frame and sits just above your eyeball. When the display is on, other people can’t fail to see the bright little mirror image of what you’re looking at. Even when the display is turned off and the prism is just a clear block in front of your right eye, it’s impossible to forget about. For a device like this to have a chance, it will need a display that is much more discreet.
One solution may be something like what’s in the works at Lumiode, a startup that uses LEDs to create microdisplays. Typically, LEDs serve as the light source at the rear of a display, and the light passes through filters to form the pixels that together create images. Lumiode eschews the filters. Instead, it uses individual LEDs as pixels, adding a layer of transistors to control how they emit light. Lumiode founder and CEO Vincent Lee says the approach could yield tiny displays that are 10 times brighter and more energy-efficient than those based on other display technologies. That could make it easier to integrate a display into regular-looking glasses, cut down on clunky batteries, and make the glasses work better outdoors, too.
Lumiode is now focused on perfecting the process of fabricating the layer of transistors atop the LEDs without ruining the lights. Lee says the obtrusiveness of a Lumiode display that’s built into a pair of smart glasses will depend on a few factors, including the optics used in the glasses. Eventually, he says, it could fit into the frame.
A more radical way to cut down on smart glasses’ bulk may be to take the lens that magnifies what’s on the display out of the glasses entirely and bring it closer to the eye. A company called Innovega is doing this by developing contact lenses with a tiny bump that acts as a microscope for content streamed from the inside of a pair of glasses. The lenses do nothing when you’re looking at the world around you, but when media is streamed toward your eyes from a projector or display panels built into the glasses, it passes through the bump on each contact and comes into focus just in front of the eye. This offers the benefit of showing content to both eyes, and the image stays in focus as your eyes move.
Innovega showed off an early prototype of its technology, streaming high-definition content, at the 2014 International Consumer Electronics Show in Las Vegas. The glasses looked a lot like normal—albeit dorky—sunglasses, and chief executive Steve Willey says the company is developing a consumer contact lens. It plans to seek approval from the U.S. Food and Drug Administration in 2015.
Even if displays can be made practically invisible and much more energy-efficient, smart glasses will need battery technologies that can last through a full day of use and eliminate the bulging batteries currently attached to Glass.
That probably will require a combination of breakthroughs. Software must be optimized to use power more frugally (already, the Glass team has made progress in this regard). And something like the thin, flexible, printed rechargeable batteries made by the startup Imprint Energy could be contained in the frames. These zinc-based batteries would eliminate some of the bulk typically associated with lithium-ion batteries, which require protective layers because they are sensitive to oxygen.
In addition, some sort of power harvesting could replenish the batteries throughout the day. A company called Perpetua Power is working on technology that uses body heat to produce electricity; in theory, your smart glasses could extend their battery life with tiny thermoelectric generators at the parts of the frame that touch your skin, such as the bridge or the temples. For now, though, Perpetua’s module is much too big: one by two centimeters. And each one can generate only a fraction of the power you’d need to run even a fitness-tracking wristband. Perpetua’s bracelet-like prototypes include eight to 10 modules.
Fashion backward
Google has tried hard to make Glass more fashionable. It formed a partnership with the world’s largest eyeglass maker, Luxottica Group, whose brands include Ray-Ban and Oakley. (Intel is also working with Luxottica on a smart-glass project.) It cozied up to designer Diane von Furstenberg, who designed a Glass frame and aviator-style shades that come in hues like “shiny lagoon” and “rose gold flash.”
Speaking on the sidelines of a Google-hosted design conference in San Francisco in November, Isabelle Olsson, the lead designer for Glass, said that while Google is always trying to make Glass as sleek as possible, getting people to wear a head-up display comes down to giving them cool frames and colors to choose from. She said the prospect of having more fashionable options “sounds kind of banal in a way” but is even more important than miniaturizing the technology.
“If you can pick the frame that you would normally pick and that you’re normally comfortable with, it’s going to look more like you,” said Olsson, who wore a matte black Glass during our conversation.
I didn’t expect Olsson to speak ill of Glass; she works for Google, after all, and as is true for a number of people at the company, Glass is her baby. She has managed to bring it miles from where it was when she started at Google in 2011: a prototype she described as a scuba mask with a phone attached to it and cables running to a backpack. But I think she’s wrong to say that stylish frames matter terribly much when it comes to luring more users. Her comment is a reminder that Google got it backwards: it never gave people very good reasons to wear computers on their faces, and without those reasons the devices could not appeal to most people, no matter how they looked. Stylish frames can’t fix that; they will make a difference only after the technology dissolves into them.
I agreed with Olsson on one big point: it’s a numbers game. The more people out there wearing these things, the more normal it will seem, she reasoned. Indeed, even regular glasses, which have been around in various forms for over 700 years, didn’t become fashionable until the last century.
The difference, though, is that regular glasses perform a valuable function. When smart glasses do too, their style might actually matter.
Rachel Metz, MIT Technology Review’s senior editor for mobile technology, wrote about anonymity apps in the November/December issue.
This story was updated on December 17, 2014.