Intelligent Machines

From PlayStation to PC

Whether you play them or not, video games are good for you. These exercises in interactivity are spurring advances in interfaces and 3-D graphics that will benefit all computer users.

The school bell rings, and teenagers flood the hallways, heading for lockers, the lunchroom, their next classes. Many pull out Cybikos, popular handheld devices that are a combination personal digital assistant, wireless messenger and game machine. Some students update their calendars with the latest homework assignments, but others check on their Cy-Bs. While the kids have been studying math, history and science, these colorful cartoon creatures have been eating, working, playing together, paying taxes, even breeding, in CyLandia, their virtual game world. The game’s goal is to raise happy, productive Cy-Bs that live long and prosper; players accomplish this by training the Cy-Bs, sending them over a local wireless network to visit other players’ Cy-Bs to improve their social skills, and helping them find jobs.

This is the new face of video gaming: mobile, networked, interactive and remarkably lifelike. More to the point for society at large, its rapid adoption by a generation of young computer users may herald aspects of the future of computing in general, from PCs to personal digital assistants like the Palm to cell phones. You may soon be able to take a virtual walk through your computer’s contents, interact with scores of people in real time and send artificially intelligent agents out to do your bidding; and if you do, you will owe a word of thanks to game devices like the Cybiko. Indeed, games have long played a special role in driving computing. “The segment of software that has pushed hardware development most is games,” says game developer Bernard Yee, former director of programming at Sony Online Entertainment.

This influence seems to be accelerating. The 2000 U.S. census found that 54 million American households have computers, and Yee says that gaming is now “arguably the number one use” of those machines. “It’s reported as the number two use, behind word processing,” he says. “But people don’t like to admit that they play games.” Boston-based consulting firm IDG estimates that North Americans will own over 72 million dedicated game consoles by 2004, be they Sony PlayStations, Nintendo GameCubes or Microsoft’s new Xboxes. All this game play is likely to influence younger users’ expectations for their other computing experiences. Real-time networking, 3-D graphics, interactive interfaces, artificial-intelligence systems and the computerized home of the future will all reflect the synergy between gaming and other areas of computing. “The next generation of people that are going to be using [computers] are much more familiar with this sort of stuff and are that much more comfortable with it,” says Steven Drucker, a researcher in the Next Media group at Microsoft Research. And, he notes, they will likely demand the same technologies and user experiences from other computing devices as well.

In short, games point to where computing is headed.

Graphic Roots

Gaming and other areas of computing (business, academic research, the Internet and more) have had a symbiotic relationship since the early days of video games, with hardware and software developments frequently crossing from one field to another and driving the evolution of both.

Home game consoles, introduced in the early 1970s, preceded the home computer revolution of the 1980s. In fact, devices like the Magnavox Odyssey (see “Video Game Odyssey,” p. 96), the first successful game console, and the blockbuster Atari 2600 VCS were computers that novices could easily set up and hook to their TVs. The idea of using a TV as a graphical display persisted in early home computers like the Atari 400 and 800, the Commodore 64 and the Apple II. Although the television didn’t end up being a great computer display, it helped computers gain a foothold in the home. Meanwhile, the games played on these early systems made graphics and sound capabilities more common and therefore affordable, fast-forwarding the development of other uses of graphics, in areas like desktop publishing. “The first computer that many homes had was a game console,” says Trip Hawkins, founder of leading game maker Electronic Arts and now CEO of another game company, 3DO. “The video game has gone a long way to demystify computer technology.”

Because of its implications for both games and computing, graphics innovation has proceeded at a torrid pace. It was impressive enough when computers, which once drew spaceships as triangles shooting square projectiles at star-shaped enemies, offered gamers in the mid-1990s a first-person view of underground mazes, simulating the experience of walking through a blocky environment. Now game consoles have marched into photorealistic 3-D, rendering in real time scenes as complex as a nighttime street with rain and puddles reflecting neon lights, scenes that two decades ago would have taken the most powerful computers weeks to generate. Although computer-generated people may not yet pass for movie images of real actors, the skin tones and ever smoother features of these 3-D models are starting to cause double takes. “We may only be two generations away from graphics being good enough that it doesn’t need to get any better,” says Hawkins, one of many game industry veterans who name graphics as the most important video game technology, past and present. “The video game is driving the demand for graphic computing. You wouldn’t even have graphics cards in PCs if it weren’t for games.”

New graphics capabilities, however, suggest new applications. Graphical user interfaces are one area where games may soon play a major role. Microsoft Research’s User Interface group, for example, has developed a new interface called Task Gallery to replace today’s computer “desktop” (see “The Next Computer Interface,” TR December 2001). In this 3-D virtual environment, users represent files and folders as pieces of art in a gallery. The 3-D space lets the researchers create visual relationships that help users remember where things have been stored.
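
To make the idea concrete, here is a minimal sketch of how documents might be assigned stable spots in a 3-D gallery, so that location itself becomes a memory cue. The circular layout and all names here are illustrative assumptions, not Microsoft’s actual algorithm.

```python
import math

def gallery_layout(files, radius=5.0):
    # Hang each document at a fixed, evenly spaced spot on a circular
    # gallery wall, so users can recall items by where they hang.
    # (Illustrative layout only, not the Task Gallery's real scheme.)
    spots = {}
    for i, name in enumerate(sorted(files)):
        angle = 2 * math.pi * i / len(files)
        spots[name] = (radius * math.cos(angle),  # x along the wall circle
                       1.5,                       # y: hang at eye height
                       radius * math.sin(angle))  # z along the wall circle
    return spots

print(gallery_layout(["budget.xls", "memo.doc", "photo.jpg"]))
```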

George Robertson, who heads the Task Gallery group, notes that a key part of the effort is to create technologies that let users readily find their way around a 3-D environment by, say, reproducing the perspective shifts they would experience navigating through connected rooms or negotiating turns. Video game developers are often ahead of his group. “The computer science researchers who work on 3-D navigation techniques pay close attention to what goes on in the gaming community,” says Robertson. “There’s a real symbiosis.” He also believes that kids who are growing up playing games with 3-D environments will start demanding the same kind of interactivity from other computing applications. “The gaming community is definitely building a user population for us.”
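
The navigation techniques Robertson describes boil down to updating a virtual camera the way first-person games do: a position plus a heading, changed as the user walks and turns. A minimal sketch, with all class and method names invented for illustration:

```python
import math

class FirstPersonCamera:
    # Minimal first-person camera of the kind 3-D games use:
    # a position and a heading, updated as the player moves.
    def __init__(self, x=0.0, z=0.0, yaw=0.0):
        self.x, self.z = x, z  # position on the ground plane
        self.yaw = yaw         # heading in radians, 0 = facing +z

    def turn(self, radians):
        # Negotiating a turn is just a change of heading.
        self.yaw = (self.yaw + radians) % (2 * math.pi)

    def walk(self, distance):
        # Walking moves the viewpoint along the current heading,
        # which produces the perspective shift the user sees on screen.
        self.x += distance * math.sin(self.yaw)
        self.z += distance * math.cos(self.yaw)

    def forward(self):
        # Unit view direction, handed to the renderer each frame.
        return (math.sin(self.yaw), math.cos(self.yaw))

cam = FirstPersonCamera()
cam.walk(3.0)          # walk into the next "room"
cam.turn(math.pi / 2)  # turn right 90 degrees
cam.walk(2.0)
print(cam.x, cam.z, cam.forward())
```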

Computing with Feeling

While games have been influencing computer graphics for over 30 years, their effects on other technologies are just emerging. Haptics, which adds the sense of touch to computing through force feedback and other mechanisms built into input devices like mice and joysticks, is one discipline making the transition from gaming to widespread adoption (see “Touchy Subjects,” TR April 2001). “Haptic technology really made its first inroads in the gaming area,” says Bruce Schena, chief technology officer of Immersion, a tactile-feedback device maker whose software led the way toward haptic interfaces for the PC. “Now we’re seeing it show up more and more places, further into the mainstream.”

Haptic interfaces were first available to the public in the arcade. Sega’s 1986 OutRun was a driving game with a haptic twist: drive onto the shoulder, and the steering wheel trembled; crash, and it shook violently. But before 1996, PC games couldn’t include force feedback because Microsoft Windows didn’t have any way to output data to a controller. Then Immersion built a tool kit to help PC game makers add haptics to games, enabling players to feel various forces through a joystick, enhancing their experience and improving their control of simulated planes and cars. When Microsoft saw the first few games using the technology, it approached Immersion, and the two worked together to create tools to both help programmers and provide the necessary support in the operating system. Today all the major consumer haptic interfaces for the PC use the company’s technology.
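
The core of a joystick force-feedback effect is simple physics. Below is a minimal sketch of a “spring” effect of the general kind such tool kits expose; the function and parameter names are hypothetical, not Immersion’s or Windows’ actual API.

```python
def spring_force(position, center=0.0, stiffness=0.8, damping=0.1, velocity=0.0):
    # Classic force-feedback "spring": the farther the stick is pushed
    # off center, the harder it pulls back; a little damping keeps it
    # from oscillating. (Hypothetical sketch, not a real driver API.)
    force = -stiffness * (position - center) - damping * velocity
    return max(-1.0, min(1.0, force))  # clamp to the device's output range

# Each tick, the game reads the stick position and writes a force back.
for pos in (0.0, 0.5, 1.0, -0.7):
    print(pos, spring_force(pos))
```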

First marketed to PC gamers in a special mouse that was fixed to a pad, Immersion’s technology has been integrated into the more ordinary-looking iFeel mouse from Logitech. Now haptic enhancements are available for Web sites and for Microsoft’s Word and Excel, allowing users to “feel” when they mouse over a link or select a button on a toolbar.

While the ability to feel a Web link may not seem especially enticing, Immersion is exploring the use of the same basic interface to let PC users experience other sensations, such as temperature or complex textures, a feat that could have practical implications for, say, comparing the fabrics of clothes at online merchants. It has also worked with other companies to create “streaming tactile content” for the Web; objects that users can pivot and play with visually today will be touch enhanced in the near future. And Schena says tactile cues will go even further. Immersion’s research has shown that tactile cues become especially useful as visual interfaces get smaller and are used on the go, so it has developed haptic feedback technology for the “touch pads” used in laptop computers and is working on extending it to cell phones and touch screens for personal digital assistants. Says Schena, “We believe haptics will become an expected part of interfaces for all kinds of computing devices. Five years from now, for example, if you work on a PC that doesn’t have tactile feedback, you’ll think something’s broken.”

A.I. Gets Game

Beyond artificial touch comes artificial intelligence (see “A.I. Reboots,” p. 46), another field influenced by video games. This influence can be traced back at least to the late 1950s, when researchers first attempted to program computers to play chess. Microsoft’s Drucker points out that A.I. techniques developed then have since become widely used commercially, in airline route planning, for instance. In the last decade, the availability of cheap computer power outside big labs, coupled with the hunger for ever more realistic games, has prompted game developers to begin tackling artificial-intelligence questions once reserved primarily for academics. “The game industry is full of really bright, really well-read folks who are also pretty fearless,” says A.I. researcher Bruce Blumberg, who heads the MIT Media Lab’s Synthetic Characters group. “The combination means they’re doing things that are really interesting.”
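
The game-tree search at the heart of those early chess programs survives, in refined forms, in today’s games and planners. Here is a bare-bones sketch of minimax, the core idea: try every legal move, assume the opponent replies with their best move, and pick the line with the best outcome. The callback names are invented for this sketch; a real game supplies its own move generator and evaluator.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # moves(state) -> list of legal moves; apply_move(state, m) -> new
    # state; evaluate(state) -> score from the maximizer's point of view.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, evaluate)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               moves, apply_move, evaluate)
            if score < best:
                best, best_move = score, m
    return best, best_move
```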

Game developers have focused especially on finding ways to simulate the behaviors of humans and animals. A prime example is the award-winning game Black and White. Created by British game developer Peter Molyneux, the game offers a 3-D world in which the player takes the part of a god and trains a massive monster from birth, teaching it to either maim or assist villagers who call out for the player’s help. The game quickly caught the attention of Michael Macedonia, chief scientist and technical director of the U.S. Army Simulation, Training and Instrumentation Command, who keeps a close watch on the video game industry, frequently borrowing techniques for military simulations. “When Peter was doing a demo one time, and he started beating the ape into submission (the ape gets bruises), I had to remind myself that this was a video game,” says Macedonia.
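
The training Macedonia describes is, at bottom, reward-driven learning: behavior the player rewards becomes more likely, and behavior the player punishes fades. A toy sketch of that general technique, not the game’s actual implementation:

```python
import random

class Creature:
    # Toy reward-driven trainee: actions the player rewards grow more
    # likely; punished ones fade. (Sketch of the general technique only.)
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        # Pick an action with probability proportional to its weight.
        r = random.uniform(0, sum(self.weights.values()))
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action  # guard against floating-point leftovers

    def feedback(self, action, reward):
        # A pat (positive reward) reinforces; a slap (negative) suppresses.
        self.weights[action] = max(0.05, self.weights[action] * (1 + reward))

pet = Creature(["help villager", "eat villager"])
for _ in range(20):
    choice = pet.act()
    pet.feedback(choice, 0.5 if choice == "help villager" else -0.5)
print(pet.weights)  # "help villager" should now dominate
```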

Another case in point is Cybiko’s CyLandia game. The tiny program runs a complex economic model of the Cy-Bs’ world and can maintain several Cy-Bs with distinct personalities and social histories. The Cy-Bs also draw on software agent technology: the cartoon creatures perform most of their daily activities independently of their owners. “It’s really unlike other A.I. products out there: it’s thin, it’s small, and it’s robust,” says Cybiko president Don Wisniewski. In fact, the A.I. proved so effective that the company incorporated it into the Cybiko operating system, which other device manufacturers have expressed interest in licensing.
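
The agent side of this is easy to picture: each virtual pet runs its own decide-and-act loop rather than waiting for commands. A minimal sketch of such a needs-driven agent follows; the needs, rates and names are invented for illustration, not Cybiko’s actual model.

```python
class CyPet:
    # Autonomous virtual-pet agent: each tick it acts on whichever need
    # is most pressing, without owner input. (Hypothetical model.)
    def __init__(self):
        self.needs = {"hunger": 0.2, "fatigue": 0.1, "boredom": 0.3}
        self.remedy = {"hunger": "eat", "fatigue": "sleep", "boredom": "play"}

    def tick(self):
        # Needs grow over time...
        for n in self.needs:
            self.needs[n] = min(1.0, self.needs[n] + 0.1)
        # ...and the pet satisfies the most urgent one, resetting it.
        urgent = max(self.needs, key=self.needs.get)
        self.needs[urgent] = 0.0
        return self.remedy[urgent]

pet = CyPet()
print([pet.tick() for _ in range(5)])  # e.g. play, eat, sleep, ...
```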

All this is not a one-way street, of course. The Media Lab’s Blumberg now regularly has his research group members attend game developer conferences-both to see what the gamers are up to and to share their own results. That interplay, he says, “is something that wouldn’t have happened five years ago.” The result is a synergy much like that found in the development of graphics, with each group furthering the work of the other.

The Virtual Society

Another force that could affect computing significantly is networked gaming, both wired and wireless. Online games are just hitting their stride, providing interaction on a scale no other system does, says Eric Zimmerman, cofounder of networked-game maker gameLab. The idea of connecting gamers in remote locations isn’t new: in 1985 Lucasfilm created Habitat, an online gaming world that ultimately hosted thousands of users connecting from modem-linked Commodore 64s. Wider commercial success for this format has come more recently, most notably with Electronic Arts’ Ultima Online in 1997 and Sony’s EverQuest in 1999. Hordes of new games like these have opened their virtual worlds to players internationally since then. At any given moment, hundreds of thousands of gamers are meeting online in these 3-D fantasy worlds complete with their own species, economies and laws.

An online game system of this sort joins the graphically intensive demands of entertainment software with issues like scalability (maintaining high-quality service whether one person or tens of thousands of people are connected) that are more often associated with business systems. Yee, who led the development of EverQuest while at Sony, notes that although its system demands are not as strict as a financial network’s, people invest a lot of time in developing characters, and what is stored on the EverQuest server has real value (EverQuest and Ultima Online characters often sell for more than $1,000 on eBay). “You need a high degree of reliability,” he says.
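
Reliability of the kind Yee describes starts with never corrupting a character on disk. One standard technique is the atomic write: save the new state to a temporary file, then rename it over the old one, so a crash mid-write leaves the previous copy intact. A generic sketch assuming JSON-serializable character data, not EverQuest’s actual persistence layer:

```python
import json, os, tempfile

def save_character(character, path):
    # Write the new state to a temp file in the same directory, then
    # atomically rename it over the old save; a crash mid-write cannot
    # corrupt a character worth real money on eBay.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(character, f)
        os.replace(tmp, path)  # atomic within one filesystem
    except BaseException:
        os.remove(tmp)  # clean up the partial file on failure
        raise

save_character({"name": "Karana", "level": 38, "gold": 1204}, "karana.json")
```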

Some of the technologies that went into building such systems are starting to make the transition to remote communication, education and videoconferencing. Microsoft’s Drucker has worked on fashioning networked virtual worlds that let bone marrow transplant patients at the Fred Hutchinson Cancer Research Center in Seattle, kept in isolation because of their weakened immune systems, play games, chat, even share virtual presents with their families and friends. Other efforts aim to bring networked game technologies to bear on education. “Games are very compelling; they can be addictive,” says Drucker. “And if we can harness that addiction for education purposes, then you’re going to have a wonderful synergy.” In one Microsoft project, gamelike simulations are being used to help children with mild autism develop better social skills. And Drucker hopes teachers who know education but not programming may soon be able to use software originally designed to simplify the creation of massively multiplayer games to create networked virtual worlds that demonstrate complicated concepts.

Videoconferencing applications are a bit further out, but Drucker and his Microsoft colleague Robertson say online worlds may influence the future of this field as well. A virtual conference using avatars or other graphical representations of participants would use less bandwidth than real-time video. Video interpretation technology could then be used to simulate participants’ facial expressions. Virtual meetings could also solve the so-called gaze problem: a participant looking at her computer screen is always staring in the same direction, while in actual meetings, people tend to look at whoever is speaking. Avatars could be directed to look at the speaker automatically. “All of that gets you closer to face-to-face interactions,” says Robertson.
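
Directing an avatar’s gaze is a small geometry problem: rotate the head toward the current speaker. A minimal sketch, assuming positions on the meeting room’s floor plane and yaw in radians; the conventions and names are invented for illustration.

```python
import math

def gaze_yaw(avatar_pos, speaker_pos):
    # Yaw the avatar's head needs so it appears to look at the current
    # speaker, instead of a webcam participant's fixed stare.
    dx = speaker_pos[0] - avatar_pos[0]
    dz = speaker_pos[1] - avatar_pos[1]
    return math.atan2(dx, dz)  # heading toward the speaker, in radians

# Whenever the active speaker changes, reorient every other avatar.
avatars = {"ann": (0.0, 0.0), "bo": (2.0, 0.0), "cy": (1.0, 2.0)}
speaker = "cy"
for name, pos in avatars.items():
    if name != speaker:
        print(name, round(gaze_yaw(pos, avatars[speaker]), 2))
```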

The work in education and videoconferencing focuses mainly on fixed, wired networks. But wireless systems are also benefiting from game technology. Cybiko’s personal digital assistants combine a screen, tiny keyboard, gaming controls and local radio-frequency wireless networking. Kids can download and play games (alone or with friends up to 100 or so meters away), send instant messages and use the digital-assistant features to schedule activities.

All of this is designed to get users interacting. Cybiko’s slogan is “stop playing with yourself.” And its kind of wireless networked gaming is making its way to other personal digital assistants, like Palm and Windows Pocket PC devices. Cybiko also recently teamed up with Nortel Networks and Motorola to offer downloads of its games onto some Motorola phones. Indeed, video games are a major factor in motivating cell phone makers to add color screens and make other improvements in their displays, maintains Cybiko founder and CEO David Yang. There are other incentives as well: multimedia applications like surfing the Web and storing and sending photos. But, Yang says, “Games will be a big part of that, maybe more than 50 percent of all multimedia experiences.” He also says that ad hoc local wireless networks of the sort formed by Cybiko handhelds could beat out short-range wireless standards like Bluetooth and 802.11b for low-cost, low-power communications.

The Networked Home and Beyond

One of the most anticipated developments in the future of computing is the transformation of the game console from a stand-alone box hooked to the TV into the center of a revolution in home networking (see “The Future of TV,” TR November 2001). The latest game boxes bundle powerful processors, graphics and networking technologies in a stable and easy-to-use package, blurring the line between game machine and home computer. And as DVD capabilities are added in, these devices may spur the fabled convergence of different forms of media, from movies, TV and multiplayer games to the interactive Web. “You’re seeing this compelling use of either immersive environments or novel user interfaces that are first developed in many single-user games,” says Microsoft’s Drucker. “Then you take the addition of multiple users, and you put that all together with the way the computers are starting to be used as communication devices and information-disseminating devices, and you’re getting the new media.”

All this could be the first step toward controlling the home. Visionaries have long promised the day when a central computer controls all of a household’s functions, from alarms and lights to washer-dryer and entertainment, making it possible, for example, to use the Web to turn off the coffeepot that was accidentally left on, or to identify a freezer component that’s having trouble before it fails (and melts the ice cream). 3DO’s Hawkins, for one, thinks that master computer might be the game box.

Hawkins calls the integrated DVD and networking features the PlayStation 2 brought to market at its October 2000 introduction “a watershed event” that could set the stage for the long-awaited household takeover. Even Microsoft, which maintains its commitment to the PC as the center of any home network, is preparing for this possible future, initially by giving the Xbox its own DVD and networking capabilities and, down the road, by expanding its involvement with other media. Sony in particular is fomenting the revolution, joining IBM and Toshiba to invest $400 million in developing a chip to power the PlayStation 3. Code-named Cell, the chip will process instructions in parallel, making it far faster and more powerful than today’s serial processors. In fact, Sony estimates that by the third generation, around 2005, chips in this family will exceed the power of their Intel Pentium contemporaries, giving PlayStation consoles the ability to do much more than play games. In announcing Cell, Sony Computer Entertainment president Ken Kutaragi said it “will raise the curtain on a new era of high-speed, network-based computing.”

This largely game-driven transformation of the home is a clear indicator of how gaming’s influence has spread beyond traditional crossover areas like graphics into almost every aspect of computing. Nonetheless, many game developers don’t view themselves as technologists at all. “There’s a computer in the equation when a game designer is creating a game, but that doesn’t have to be the focus,” says gameLab’s Zimmerman. “Creating a meaningful experience for players is not about technology.”

That, he says, is because the heart of video gaming is something that can’t be captured on a 500-million-transistor chip or in software: it’s the experience a developer sets out to create for game players. Happily for the rest of us, though, developing games with the players’ experience in mind often takes technology to its limits and provides new insights into what computers can do.

So let the games go on.