The art of making perfumes and colognes hasn’t changed much since the 1880s, when synthetic ingredients began to be used. Expert fragrance creators tinker with combinations of chemicals in hopes of producing compelling new scents. So Achim Daub, an executive at one of the world’s biggest makers of fragrances, Symrise, wondered what would happen if he injected artificial intelligence into the process. Would a machine suggest appealing formulas that a human might not think to try?
Daub hired IBM to design a computer system that would pore over massive amounts of information—the formulas of existing fragrances, consumer data, regulatory information, on and on—and then suggest new formulations for particular markets. The system is called Philyra, after the Greek goddess of fragrance. Evocative name aside, it can’t smell a thing, so it can’t replace human perfumers. But it gives them a head start on creating something novel.
Daub is pleased with progress so far. Two fragrances aimed at young customers in Brazil are due to go on sale there in June. Only a few of the company’s 70 fragrance designers have been using the system, but Daub expects to eventually roll it out to all of them.
However, he’s careful to point out that getting this far took nearly two years and required investments that will still take a while to recoup. Philyra’s initial suggestions were horrible: it kept suggesting shampoo recipes. After all, it looked at sales data, and shampoo far outsells perfume and cologne. Getting it on track took a lot of training by Symrise’s perfumers. Plus, the company is still wrestling with costly IT upgrades that have been necessary to pump data into Philyra from disparate record-keeping systems while keeping some of the information confidential from the perfumers themselves. “It’s kind of a steep learning curve,” Daub says. “We are nowhere near having AI firmly and completely established in our enterprise system.”
The perfume business is hardly the only one to adopt machine learning without seeing rapid change. Despite what you might hear about AI sweeping the world, people in a wide range of industries say the technology is tricky to deploy. It can be costly. And the initial payoff is often modest.
It’s one thing to see artificial intelligence outplay Go grandmasters, or even to have devices that turn on music at your command. It’s another thing to use AI to make more than incremental changes in businesses that aren’t inherently digital.
AI might eventually transform the economy—by making new products and new business models possible, by predicting things humans couldn’t have foreseen, and by relieving employees of drudgery. But that could take longer than hoped or feared, depending on where you sit. Most companies aren’t generating substantially more output from the hours their employees are putting in. Such productivity gains are largest at the biggest and richest companies, which can afford to spend heavily on the talent and technology infrastructure necessary to make AI work well.
This doesn’t necessarily mean that AI is overhyped. It’s just that when it comes to reshaping how business gets done, pattern-recognition algorithms are a small part of what matters. Far more important are organizational elements that ripple from the IT department all the way to the front lines of a business. Pretty much everyone has to be attuned to how AI works and where its blind spots are, especially the people who will be expected to trust its judgments. All this requires not just money but also patience, meticulousness, and other quintessentially human skills that too often are in short supply.
Last September, a data scientist named Peter Skomoroch tweeted: “As a rule of thumb, you can expect the transition of your enterprise company to machine learning will be about 100x harder than your transition to mobile.” It had the ring of a joke, but Skomoroch wasn’t kidding. Several people told him they were relieved to hear that their companies weren’t alone in their struggles. “I think there’s a lot of pain out there—inflated expectations,” says Skomoroch, who is CEO of SkipFlag, a business that says it can turn a company’s internal communications into a knowledge base for employees. “AI and machine learning are seen as magic fairy dust.”
Among the biggest obstacles is getting disparate record-keeping systems to talk to each other. That’s a problem Richard Zane has encountered as the chief innovation officer at UC Health, a network of hospitals and medical clinics in Colorado, Wyoming, and Nebraska. It recently rolled out a conversational software agent called Livi, which uses natural-language technology from a startup called Avaamo to assist patients who call UC Health or use the website. Livi directs them to renew their prescriptions, books and confirms their appointments, and shows them information about their conditions.
Zane is pleased that with Livi handling routine queries, UC Health’s staff can spend more time helping patients with complicated issues. But he acknowledges that this virtual assistant does little of what AI might eventually do in his organization. “It’s just the tip of the iceberg, or whatever the positive version of that is,” Zane says. It took a year and a half to deploy Livi, largely because of the IT headaches involved with linking the software to patient medical records, insurance-billing data, and other hospital systems.
Similar setups bedevil other industries, too. Some big retailers, for instance, save supply-chain records and consumer transactions in separate systems, neither of which is connected to broader data storehouses. If companies don’t stop and build connections between such systems, then machine learning will work on just some of their data. That explains why the most common uses of AI so far have involved business processes that are siloed but nonetheless have abundant data, such as computer security or fraud detection at banks.
Even if a company gets data flowing from many sources, it takes lots of experimentation and oversight to be sure that the information is accurate and meaningful. When Genpact, an IT services company, helps businesses launch what they consider AI projects, “10% of the work is AI,” says Sanjay Srivastava, the chief digital officer. “Ninety percent of the work is actually data extraction, cleansing, normalizing, wrangling.”
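To make that 90% concrete, here is a rough sketch of what a couple of those cleansing and normalizing steps can look like in Python with pandas. The two toy tables, their column names, and the values in them are invented for illustration; this is not Genpact’s pipeline or any client’s actual data.

```python
# Illustrative only: two hypothetical exports describing the same customers,
# recorded differently by two legacy systems.
import pandas as pd

crm = pd.DataFrame({
    "Customer ID": ["A-001", "A-002", "A-002"],          # note the duplicate row
    "Signup Date": ["03/01/2018", "11/15/2018", "11/15/2018"],
    "Annual Spend": ["$1,200", "$850", "$850"],
})
billing = pd.DataFrame({
    "customer_id": ["A-001", "A-003"],
    "signup_date": ["2018-03-01", "2019-02-20"],
    "annual_spend_usd": [1200.0, 430.0],
})

def normalize(df):
    """Standardize column names, dates, and currency strings, and drop duplicates."""
    df = df.copy()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.rename(columns={"annual_spend": "annual_spend_usd"})
    df["signup_date"] = pd.to_datetime(df["signup_date"])
    df["annual_spend_usd"] = (
        df["annual_spend_usd"].astype(str)
        .str.replace(r"[$,]", "", regex=True)
        .astype(float)
    )
    return df.drop_duplicates(subset="customer_id")

# Only after this kind of wrangling can a merged table feed a model.
clean = pd.concat([normalize(crm), normalize(billing)]).drop_duplicates("customer_id")
print(clean)
```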
Those steps might look seamless for Google, Netflix, Amazon, or Facebook. But those companies exist to capture and use digital data. They’re also luxuriously staffed with PhDs in data science, computer science, and related fields. “That’s different than the rank and file of most enterprise companies,” Skomoroch says.
Indeed, smaller companies often require employees to delve into several technical domains, says Anna Drummond, a data scientist at Sanchez Oil and Gas, an energy company based in Houston. Sanchez recently began streaming and analyzing production data from wells in real time. It didn’t build the capability from scratch: it bought the software from a company called MapR. But Drummond and her colleagues still had to ensure that data from the field was in formats a computer could parse. Drummond’s team also got involved in designing the software that would feed information to engineers’ screens. People adept at all those things are “not easy to find,” she says. “It’s like unicorns, basically. That’s what’s slowing down AI or machine-learning adoption.”
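As an illustration of what “formats a computer could parse” means in practice, the sketch below cleans up a few invented well readings with mismatched timestamps, mixed units, and a sensor dropout. The readings, field names, and schema are hypothetical; this is not Sanchez’s system or MapR’s software.

```python
# Hypothetical example of the kind of field-data parsing Drummond describes:
# raw readings arrive with inconsistent timestamp formats, units, and gaps,
# and must be coerced into one machine-readable schema before analysis.
from datetime import datetime

RAW_READINGS = [
    "2024-03-01T06:00:00Z,WELL-7,pressure,2175 psi",
    "03/01/2024 06:05,WELL-7,pressure,14996 kPa",
    "2024-03-01T06:10:00Z,WELL-7,pressure,",           # sensor dropout
]

def parse_reading(line):
    """Return (timestamp, well_id, metric, value_in_psi), or None if unusable."""
    ts_raw, well_id, metric, value_raw = line.split(",")
    for fmt in ("%Y-%m-%dT%H:%M:%SZ", "%m/%d/%Y %H:%M"):
        try:
            ts = datetime.strptime(ts_raw, fmt)
            break
        except ValueError:
            continue
    else:
        return None                                     # unrecognized timestamp
    value_raw = value_raw.strip()
    if not value_raw:
        return None                                     # drop empty readings
    number, unit = value_raw.split()
    value = float(number)
    if unit == "kPa":
        value *= 0.145038                               # convert kPa to psi
    return ts, well_id, metric, round(value, 1)

clean = [r for r in (parse_reading(line) for line in RAW_READINGS) if r]
print(clean)
```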
Fluor, a huge engineering company, spent about four years working with IBM to develop an artificial-intelligence system to monitor massive construction projects that can cost billions of dollars and involve thousands of workers. The system inhales both numeric and natural-language data and alerts Fluor’s project managers about problems that might later cause delays or cost overruns.
Data scientists at IBM and Fluor didn’t need long to mock up algorithms the system would use, says Leslie Lindgren, Fluor’s vice president of information management. What took much more time was refining the technology with the close participation of Fluor employees who would use the system. In order for them to trust its judgments, they needed to have input into how it would work, and they had to carefully validate its results, Lindgren says.
To develop a system like this, “you have to bring your domain experts from the business—I mean your best people,” she says. “That means you have to pull them off other things.” Using top people was essential, she adds, because building the AI engine was “too important, too long, and too expensive” for them to do otherwise.
Once an innovation arises, how quickly will it diffuse through the economy? Economist Zvi Griliches came up with some fundamental answers in the 1950s—by looking at corn.
Griliches examined the rates at which corn farmers in various parts of the country switched to hybrid varieties that had much higher yields. What interested him was not so much the corn itself but the value of hybrids as what we would today call a platform for future innovations. “Hybrid corn was the invention of a method of inventing, a method of breeding superior corn for specific localities,” Griliches wrote in a landmark paper in 1957.
Hybrids were introduced in Iowa in the late 1920s and early 1930s. By 1940 they accounted for nearly all corn planted in the state. But the adoption curve was nowhere near as steep in places like Texas and Alabama, where hybrids were introduced later and covered about half of corn acreage in the early 1950s. One big reason is that hybrid seeds were more expensive than conventional seeds, and farmers had to buy new ones every year. Switching to the new technology was a riskier proposition for the farms in these states than in the richer and more productive corn belt of the Midwest.
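Griliches summarized those differences by fitting S-shaped logistic curves to the share of corn acreage planted with hybrids, characterizing each region by an origin, a slope, and a ceiling. The sketch below uses that functional form with made-up parameters, not his actual estimates, to show how a steep, high-ceiling curve differs from a slow, low-ceiling one.

```python
# A minimal sketch of the logistic diffusion curve Griliches fit:
# P(t) is the share of acreage planted with hybrids t years after introduction,
# K is the ceiling, b the rate of acceptance, and a fixes the origin in time.
import math

def adoption_share(t, K, a, b):
    """Logistic diffusion curve: share adopted t years after introduction."""
    return K / (1 + math.exp(-(a + b * t)))

# A fast-adopting region (high ceiling, steep slope) vs. a slower, riskier one.
iowa_like  = [round(adoption_share(t, K=1.0, a=-4, b=0.8), 2) for t in range(13)]
texas_like = [round(adoption_share(t, K=0.6, a=-4, b=0.45), 2) for t in range(13)]
print(iowa_like)
print(texas_like)
```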
What Griliches captured, and what subsequent economists confirmed, is that the spread of technologies is shaped less by the intrinsic qualities of the innovations than by the economic situations of the users. The users’ key question is not, as it is for technologists, “What can the technology do?” but “How much will we benefit from investing in it?”
Today machine learning undergirds every aspect of the operations of companies like Facebook, Google, and Amazon, as well as many startups. It’s making these companies exceptionally rich. But outside that AI belt, things are moving much more slowly, for rational economic reasons.
At Symrise, Daub thinks the perfume AI project fell into a sweet spot. It was a relatively small-scale experiment, but it involved real work for a fragrance client and wasn’t a mere lab simulation.
“We’re all under a lot of pressure,” he says. “No one really has time to do greenfield learning on the side.” Yet even this required a leap of faith in the technology. “It’s all about conviction,” he says. “There’s a very strong conviction in me that AI will play a role in most of the industries we see today, some more predominantly. To completely ignore it is not an option.”