
Connectivity

Terrible people have learned to exploit the internet. Yasmin Green is fighting back.

The Jigsaw team at Alphabet brings people who were radicalized online back from the brink, one video at a time.

Hate speech, online radicalization, bomb-making tutorials, state-sponsored fake news, web censorship—the internet is a terrible place.

A portion of it is, anyway. Helping to keep that portion as small as possible is the job of Yasmin Green, director of research and development at Jigsaw, an arm of Google’s parent company, Alphabet. She takes on terrorists and trolls alike with a multinational group of around 60 specialists, including software engineers, research scientists, and product managers. They can also tap into Google’s huge resources. But Green and her colleagues don’t spend all their time coding and building; many of them also travel to geopolitical hot spots to study radicalization up close. She spoke with MIT Technology Review about some of the group’s methods.

What drew you to the role at Jigsaw?

I left Iran, where I was born, at a very young age and grew up in the UK. I remember going back at the age of 19 and being astounded by the level of censorship in the country, both offline and online. Terms like “world wide web” can sound ironic to people living in countries where information isn’t free. So I was drawn to work at a company that tries to make information available to everyone.

Jigsaw deals with online threats, but team members also visit conflict zones as part of their work. What’s the goal of these trips?

We want to make sure that we’re designing technology that’s based on an understanding of human experiences. A big part of our methodology is sending team members out into the field to interview people who are on the front lines of the challenges we try to tackle, whether that’s repression or conflict. One of our field trips was to Iraq, where we sat face to face with people who had joined and then left the terrorist group known as ISIS. We wanted to understand the radicalization process, both the human elements and the role that technology played. We heard about how people discovered ISIS, how people were recruited, and how technology was useful to the logistics of their travel [to join the group].

What’s the most important lesson you’ve learned about ISIS’s ability to leverage the internet for recruitment?
 
ISIS has mastered pretty much every medium, from radio to leafleting. When it comes to the internet, they’ve really understood the power of microtargeting. They create content in a long list of languages: Arabic and English, of course, but the list goes on and on, all the way to Chinese and Hebrew. The one that really blew my mind was a video in sign language. So they are creating very local recruiting materials and using the algorithms that are available through social media to distribute this material and reach people in all corners of the world.

Some of the content terrorist networks post online clearly needs to be taken down. But how do we deal with more subtle forms of propaganda?

There are definitely categories of content that you want to make sure don’t see the light of day, like beheadings and bomb-making tutorials. Then there’s a whole host of other content that isn’t advocating violence but could help advance people down the path toward it. Our research was aimed at understanding which recruiting themes got people to sign up with ISIS. It turns out they generally weren’t drawn in by beheadings; instead, they were convinced that the group was religiously legitimate and that the cause of jihad was their religious duty. ISIS was shaping the conversation, asking questions to which [it] had seemingly compelling answers.

How have you tried to counter this online radicalization?

One of the takeaways for us was that timing is critical. By the time potential recruits are sold on the ideology, it’s too late to influence them; you have to get to them while they are sympathetic but not yet committed. So we turned to targeted online advertising and designed something called the “redirect method,” which uses ads to reach people showing sympathy for ISIS and redirects them to online videos from moderate clerics, defectors, and citizen journalists that could avert their radicalization.

What results has this method delivered?

The pilot, which ran for eight weeks in Arabic and English, reached 320,000 people. It had an exceptionally high click-through rate on the ads and drove half a million minutes of video watch time. Given that people don’t spend more than a few seconds on a video they’re not interested in, that’s encouraging. After our pilot, YouTube integrated the method into its search results. The open-source methodology has also been replicated by others, like the Gen Next Foundation, and we continue to support new deployments.

Jigsaw also tries to tackle online censorship. How bad is this problem?

If you look at Freedom House’s index on this, they say the situation gets worse every year. That’s really discouraging, because it’s antithetical to what the people developing the internet want for it. The situation is so volatile. When you have civil unrest, as there was recently in Iran, you see repressive governments facing a dilemma over whether or not to shut down the internet, because [censoring] it inflames the population and draws public attention and outrage. But if those in power feel threatened, they will censor.

What can companies like Alphabet do to counter this?

We feel a really big responsibility to help people get access to information, especially where there’s conflict and repression. One of our products, Project Shield, protects independent media around the world from a type of censorship attack called distributed denial of service, or DDoS, which knocks websites offline by flooding them with traffic. The idea came from our fieldwork: our team spoke to the Kenyan election-monitoring group, whose site had gone down on the day of a key election. Project Shield takes advantage of Google’s enormous infrastructure and world-class DDoS mitigation capabilities. Websites don’t have to host with Google or Jigsaw; the service vets traffic before it reaches their servers, so we can spot the malicious traffic and filter it out.
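The traffic vetting Green describes can be illustrated with a toy reverse proxy that rate-limits each client before forwarding requests to the origin server. This is only a sketch of the general idea, not Project Shield’s actual design; the upstream address, window, and threshold below are hypothetical.

```python
# Toy reverse proxy in the spirit of DDoS mitigation: requests that exceed a
# per-client budget are refused instead of being passed to the origin server.
# All values here (UPSTREAM, window, budget) are illustrative assumptions.
import time
import urllib.request
from collections import defaultdict, deque
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:8080"   # the protected site (assumed address)
WINDOW_SECONDS = 10                  # sliding window for counting requests
MAX_REQUESTS = 50                    # per-client budget inside the window

recent = defaultdict(deque)          # client IP -> timestamps of recent requests

class FilteringProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        now = time.time()
        q = recent[ip]
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        q.append(now)
        if len(q) > MAX_REQUESTS:
            # Looks like flood traffic: reject it here, never reaching upstream.
            self.send_response(429)
            self.end_headers()
            return
        # Otherwise forward the request to the origin server and relay the body.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), FilteringProxy).serve_forever()
```

Real mitigation systems also use caching, traffic fingerprinting, and globally distributed capacity; the point of the sketch is simply that a proxy in front of the site can absorb and filter a flood before it reaches the origin.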

Let’s switch to the problem of fake news. How can we better identify state-sponsored disinformation efforts?

The goal of these campaigns is to plant ideas and narratives via fake personas and news sites, and then have these seeds fertilized by the masses so that the conversations look like they’re organic. The research question we’ve been asking ourselves is whether there are technical markers of coordinated, covert activity versus the more organic activity we expect to see online. If there are, we could use these in automated detection.
      
We’re looking at several dimensions here. Coordinated campaigns tend to outlast organic ones, and the people involved tend to wait to get instructions from their masters, so there’s sometimes a slight delay before they suddenly act together. Those actors also tend to be linked in extremely tight networks or clusters that look anomalous. There’s also a semantic dimension. These campaigns often use similar words and phrases that can be a signal of centralized control.
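As an illustration of the markers Green mentions, here is a minimal, hypothetical sketch that flags a cluster of posts when their wording is near-identical and they appear within a tight time window. It is not Jigsaw’s detector; the thresholds, data shapes, and function names are invented for the example.

```python
# Hypothetical markers of coordinated activity: near-duplicate wording plus
# bursty, tightly synchronized posting times across accounts.
from difflib import SequenceMatcher
from statistics import pstdev

def burstiness(timestamps):
    """Spread of posting times (seconds); a small spread across many accounts
    suggests they acted on the same instruction at the same moment."""
    return pstdev(timestamps) if len(timestamps) > 1 else 0.0

def max_text_similarity(posts):
    """Highest pairwise similarity; values near 1.0 mean the accounts are
    echoing essentially the same script."""
    best = 0.0
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            best = max(best, SequenceMatcher(None, posts[i], posts[j]).ratio())
    return best

def looks_coordinated(posts, timestamps, sim_threshold=0.9, burst_threshold=60.0):
    """Flag a cluster as suspicious if wording is near-identical and the posts
    landed within a tight time window (thresholds are invented)."""
    return (max_text_similarity(posts) >= sim_threshold
            and burstiness(timestamps) <= burst_threshold)

# Example: three accounts posting nearly the same sentence within 30 seconds.
posts = ["The election was rigged, share this now!",
         "The election was rigged!! share this now",
         "The election was rigged, share now!"]
times = [1000.0, 1012.0, 1030.0]
print(looks_coordinated(posts, times))  # True
```

A production system would combine many such signals, including the network-structure anomalies Green describes, rather than relying on any single threshold.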

Web companies have been criticized for not doing enough to address online hate speech and harassment. Is that criticism fair?
 
I don’t think anyone imagined that we’d have the level of intimidation and hate speech online that we currently do, and we have to try really hard to make sure that we’re getting ahead of it.

What is Jigsaw doing to frustrate online trolls?
 
We have a team dedicated to seeing how natural-language processing and machine learning can be used to identify online toxicity and help moderators and communities tackle it. We’ve developed a publicly available model called Perspective, which you can find at www.perspectiveapi.com; it scores comments for their level of toxicity. The research team is looking at ways to get to another level of granularity, to help us better identify what’s happening and how moderators can control it.
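For context, here is roughly how a developer would call Perspective to score a single comment. The endpoint and field names below follow the API’s public documentation as best I can reconstruct them, and the key is a placeholder; check perspectiveapi.com for the current interface.

```python
# Sketch of scoring a comment with the Perspective API (endpoint and fields
# per its public docs; verify at perspectiveapi.com). API_KEY is a placeholder.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You are a complete idiot."},
    "requestedAttributes": {"TOXICITY": {}},
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# A score near 1.0 means the model considers the comment very likely toxic.
score = result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")
```

Moderation tools typically use such scores to rank or flag comments for human review rather than to remove them automatically.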

There’s been a lot of concern that bias in algorithms could harm certain groups in society. Is this something you’re also focused on?

Yes. We’re spending a lot of time thinking about how to make sure Perspective’s AI doesn’t have biases that could undermine the goal of that project, which is to create inclusive and empathetic conversations.

You spend a lot of your life focused on the dark side of the internet. Are you still optimistic about its potential?

I am, but we need to keep developing innovative technologies that can help address these really hard challenges.