Intelligent Machines
Light Chips
Combining optical and electrical circuits should speed supercomputers.
Source: “CMOS Integrated Silicon Nanophotonics: Enabling Technology for Exascale Computational Systems”
William Green et al.
SEMICON, December 1-3, 2010, Tokyo, Japan
Results: IBM researchers used standard fabrication methods to create a silicon chip that incorporates silicon photonics alongside conventional electronic transistors. These optical components can pipe data into the chip as a light signal, convert it into an electrical signal that can be processed by conventional components, and then convert it back into light to be sent out of the chip.
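To make that data path concrete, the following is a minimal sketch in Python of the conversion chain described above: detect incoming light, process the signal electrically, then re-emit it as light. The function names and the bit-level model are illustrative assumptions for this summary, not details of IBM's design.

    def photodetector(optical_bits):
        # Germanium photodetector: turn incoming light pulses into
        # electrical logic levels (modeled here as a pass-through).
        return list(optical_bits)

    def electronic_logic(electrical_bits):
        # Conventional CMOS stage; a bit-flip stands in for whatever
        # processing the chip's transistors actually perform.
        return [b ^ 1 for b in electrical_bits]

    def modulator(electrical_bits):
        # Silicon modulator: encode the processed signal back onto an
        # outgoing light beam (again modeled as a pass-through).
        return list(electrical_bits)

    incoming_light = [1, 0, 1, 1, 0]  # optical signal arriving on-chip
    outgoing_light = modulator(electronic_logic(photodetector(incoming_light)))
    print(outgoing_light)  # [0, 1, 0, 0, 1]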
Why it matters: The speed of supercomputers is constrained not by processing power but by limits on how fast data can travel down the electrical wires that link up different chips. Light signals move significantly faster than electrical ones, so using them could remove that bottleneck. While other groups have made silicon components that can process light, their designs cannot usually be integrated into the standard manufacturing processes used to make a chip’s transistors.
Methods: Light-processing components are typically much larger than electrical ones, so the researchers shrank them as much as possible to keep the overall chip design compact. One important modification was to drastically reduce the thickness of the germanium in a photonic component that detects light signals. The material is needed to absorb light efficiently, but too much germanium would cripple nearby transistors by altering the behavior of the electrons that flow through them.
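The thickness tradeoff can be put in rough numbers with the Beer-Lambert law: the fraction of light absorbed in a single pass through a layer of thickness t grows as 1 - exp(-alpha * t). The absorption coefficient used below is an illustrative assumption, not a figure from the paper.

    import math

    # Assumed single-pass absorption coefficient for germanium at a
    # telecom wavelength; illustrative only.
    ALPHA_PER_UM = 0.7  # 1 / micrometer

    def absorbed_fraction(thickness_um):
        # Beer-Lambert: fraction of incident optical power absorbed in
        # one pass through a germanium layer of this thickness.
        return 1.0 - math.exp(-ALPHA_PER_UM * thickness_um)

    for t in (0.1, 0.5, 1.0, 2.0):
        print(f"{t:4.1f} um of Ge absorbs {absorbed_fraction(t):5.1%}")

Under these assumed numbers, thinning the layer from 1.0 to 0.1 micrometers cuts single-pass absorption from roughly half to under 10 percent, which is the tension the researchers had to resolve when shrinking the detector.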
Next steps: So far the chips have been made only in a lab, but the IBM team is working to make them in a commercial foundry to prove that they can be manufactured cheaply and in large volume.
Predicting Popularity
Mapping the popularity of tweets and blog posts foretells the fate of future posts
Source: “Patterns of Temporal Variation in Online Media”
Jaewon Yang et al.
Proceedings of the ACM International Conference on Web Search and Data Mining, February 2011
Results: Researchers at Stanford University built a model that can predict, with 75 percent accuracy, when the popularity of a new piece of online content will peak and how long it will last.
Why it matters: The ability to predict how widely a news story or tweet will travel could help identify the most influential blogs and Twitter posters, providing clues to who might be able to disseminate an important piece of information most broadly. Websites could use the predictions to position their content and advertising, possibly increasing click-through rates.
Methods: The researchers analyzed 170 million news articles and blog posts over the course of a year, as well as 580 million Twitter posts over eight months. They measured the attention each piece of content received by tracing how often it was mentioned elsewhere over time. Graphed, these attention patterns fall into a small set of distinct shapes: some stories spike rapidly and then fall off, making a sharp, pointed curve, while others have more staying power, rising and falling more gently. Observing the early response to a new piece of content lets the researchers predict which shape the graph of its influence will take and, thus, its popularity and staying power.
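A simplified sketch of this approach in Python: cluster the peak-normalized attention curves of past items into a few characteristic shapes, then match a new item's early trajectory against the same prefix of each shape. The paper uses a scale- and shift-invariant K-Spectral Centroid method; plain k-means over peak-normalized curves is a simplifying assumption made here, and all the data below is toy data.

    import numpy as np

    def normalize(curve):
        # Scale a mention-count time series to a unit peak so that
        # comparisons capture shape rather than absolute popularity.
        curve = np.asarray(curve, dtype=float)
        return curve / curve.max()

    def cluster_shapes(curves, k, iters=50, seed=0):
        # Basic k-means over normalized curves; each centroid is one
        # characteristic popularity shape (sharp spike, slow burn, ...).
        data = np.array([normalize(c) for c in curves])
        rng = np.random.default_rng(seed)
        centroids = data[rng.choice(len(data), size=k, replace=False)]
        for _ in range(iters):
            dists = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = data[labels == j].mean(axis=0)
        return centroids

    def predict_shape(early_counts, centroids):
        # Compare the first few observations of a new item with the
        # matching prefix of each centroid; the nearest shape implies
        # when popularity should peak and how fast it will decay.
        prefix = normalize(early_counts)
        n = len(prefix)
        dists = [np.linalg.norm(prefix - normalize(c[:n])) for c in centroids]
        return int(np.argmin(dists))

    # Toy example: a one-day spike versus a slow build.
    history = [[1, 10, 3, 1, 0, 0], [1, 2, 4, 6, 8, 6], [2, 9, 4, 1, 1, 0]]
    shapes = cluster_shapes(history, k=2)
    print(predict_shape([1, 8], shapes))  # index of the closer shape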
Next steps: The researchers are investigating when and how errors are introduced into accounts of news stories and how content changes as it travels—for example, when quotes from public figures are dispersed. They are also trying to understand the networks by which information spreads, determining the exact path it takes across the Internet. These findings could help trace information to its source and reveal which sites are truly influential.