The Invisible Humans Who Sanitize the Internet
When you use the internet, do you think about the things you're not seeing? Take Facebook, for example: It's a platform that relies entirely on users generating content. There's no pause between you uploading a photo and it appearing for your contacts to see. And yet, how often have you seen something disturbing, like graphic sexual imagery or violent videos from war zones around the globe? If the answer is "never," you can probably thank a content moderator, one of the many workers at social media companies quietly vetoing images that don't conform to the platform's content guidelines.
To give you an idea of how much user-generated content there is to sort through, here's one statistic: In 2014, YouTube users were uploading 100 hours of video to the site per minute. And while there have been advances in A.I. capable of sniffing out offensive or harmful content such as child pornography, racism, or online extremism (Science Friday talked to the developer of one such tool earlier this year), most of the work of scrutinizing it still falls to human beings, and to an increasing number of them, according to some experts.
But how do people endure a job that requires them to look at and make decisions about imagery that is illegal or too graphic for the rest of us? Two former content moderators at Microsoft are suing the company, claiming the work gave them PTSD. (Microsoft has responded that its content moderators are given tools to minimize the emotional harm, as well as company-mandated counseling.)
Los Angeles-based Rochelle LaPlante is one such worker. She says that as an independent contractor, her work is low-paid, and she often has to choose between not working at all and working for a company whose content includes graphic images to moderate.
Sarah T. Roberts, an assistant professor of information studies at the University of California-Los Angeles, has been researching the work of content moderators for years. She describes the growing outsourcing of content moderation — and the emotion that comes with it — to less wealthy nations, and why we haven’t replaced these workers with artificial intelligence or automation.
Sarah T. Roberts is an assistant professor of Information Studies at the University of California-Los Angeles in Los Angeles, California.
Rochelle LaPlante is an independent content moderator in Los Angeles, California.
MANOUSH ZOMORODI: This is Science Friday. I’m Manoush Zomorodi. And when I’m not subbing for Ira Flatow, I host a podcast about technology called Note to Self.
And as you can imagine, I talk to a lot of my listeners on social media, maybe even you. And mostly, these online platforms are safe and well-tended places to be with lots of news stories, pictures of delicious food, personal updates.
We don’t expect to see gruesome or disturbing videos or photos of things like beheadings or child pornography, murders, or war zones. And to make sure you don’t have disturbing content in your feed, social media companies hire human beings. They’re called content moderators, and their job is to inspect and remove any pictures or videos that don’t fit company guidelines.
We don't have to see it. But these people, looking at all this inappropriate content all day long, do. How this work affects content moderators' mental health is now at the center of a lawsuit filed against Microsoft. Two former content moderators say that filtering pictures and videos all day long, work that included reporting child pornography to the authorities, gave them PTSD.
And here to explain why this work cannot yet be automated and what it entails are my next two guests. Rochelle LaPlante has been a content moderator for over a decade. She's based in Los Angeles, California, and joins us by Skype. Hi, Rochelle.
ROCHELLE LAPLANTE: Hi.
MANOUSH ZOMORODI: And Sarah Roberts is an assistant professor of Information Studies at UCLA. Welcome to Science Friday, Sarah.
SARAH ROBERTS: Hi. Thank you.
MANOUSH ZOMORODI: So listeners, if you have questions about how content is moderated online, or if you’re worried about something else, like the power that content moderators have, do give us a call. We are at 844-724-8255. That’s 844-SCI-TALK. Or you can tweet us @scifri.
So Rochelle, let’s start with you. You moderate content for a living. What does your average workday look like?
ROCHELLE LAPLANTE: It kind of varies, depending on what's available. I work on a freelance platform that a wide variety of companies use to have text, images, and videos moderated. This is usually content that's submitted by users on the site or to the app.
So it’s my job to go through and review this content and see if it violates any of the guidelines for that particular company. And I have to do this really quickly because it’s paid per image. So I have to balance doing it fast enough to make it worth my time but also make sure I’m doing high-quality work at the same time. So it’s like modern-day piecework but with the added layer of psychological stress.
MANOUSH ZOMORODI: Yeah, before we go into that part, does it pay well? I mean, is this worth your while?
ROCHELLE LAPLANTE: It kind of depends. Some companies understand that it’s difficult work and pay well, and others just don’t. And sometimes it’s a penny an image, and I have to make the decision about whether I want to spend my time doing that, and if I can make it worth my time, or if it’s time to go find something else to do with my day.
MANOUSH ZOMORODI: So can you just talk us through an example of when you saw something that did not fit a company's guidelines? What did you see, and then what happens next?
ROCHELLE LAPLANTE: The what-happens-next part is really difficult because on the platform that I work on, we tag the images that don’t fit the company guidelines. But then we don’t have any follow-up to know if anything was done about it.
So, for example, if I see some child pornography and I tag it as such, I don't know if that's ever reported to the authorities, if the company takes action on it, or if these children are helped. There's just no follow-up that I'm aware of, so I don't know what happens with these images.
MANOUSH ZOMORODI: I mean, I almost hate to ask this question. But what’s the worst imagery that you’ve seen?
ROCHELLE LAPLANTE: Child pornography.
MANOUSH ZOMORODI: And do you know who you’re working for, though, when you’re doing this work?
ROCHELLE LAPLANTE: No, not at all. The way the platform works is that the companies who post these image sets can use any anonymous name they want, so we really have no idea who we’re working for or why we’re doing this work.
MANOUSH ZOMORODI: So Sarah, I want to go over to you. You are researching content moderation and have spoken to a lot of content moderators. I think a lot of people– and I know I thought this– thought that it was the algorithms that flag stuff, that robots did this work.
SARAH ROBERTS: Yeah, I’ve been looking at the large-scale industrial practice of content moderation since about 2010. That’s when I first became aware of it.
At that time, I had been on the internet myself as a pretty prolific user for almost 20 years. And I, too, had never stopped to think about the need for this kind of large-scale, for-pay work done in an organized fashion.
So I had two experiences that were repeated over and over again when I brought this up with very knowledgeable scholars, researchers, and other long-term users. The first thing that people would say was, huh, I never thought about that. And then they’d kind of take a pause for a moment, think about it, and then the next question would be, well, don’t computers do that?
So that’s a pretty common reaction– and I think due in no small part to the fact that the social media industry is predicated on notions of algorithms and smart kinds of computer filtering and other kinds of automated activities going on behind the scenes. But I think the actual social media production cycle is much more complex than that.
MANOUSH ZOMORODI: Can you just give us a sense of the scale? How many people are doing this work? Where are they doing it? Rochelle, we know you’re in LA. Is it mostly here in the States, or is it all around the world?
SARAH ROBERTS: Yeah, that's a great question. Rochelle has described her work environment working for a microlabor platform, and that is one of several sectors in which this work takes place.
Certainly, a number of large-scale, well-known social media technology companies have workers on site who do this work. In many cases, however, those workers tend to be contract laborers. So despite the fact that they’re working at the headquarters in Silicon Valley for one of these major firms, they still may not have full employee status. And that can actually really matter when it comes to things like health insurance.
Other workers might be working somewhere around the globe on the other side of the world, in places like the Philippines, working in call center environments– although it’s important to note that there are call center workers doing commercial content moderation in the United States in places like Iowa. There are workers in Europe who do this. So it’s really a global practice, and it stands to reason since social media is a 24-by-7 operation.
To answer your question about scope, it’s very difficult to have any kind of specific concrete numbers about how many people are involved in this work. But one thing I like to do is look at the user-generated content side to get a sense for the need.
In 2014, YouTube was reporting that it was receiving 100 hours of user-generated content on its platform every minute. So if you think about that kind of volume on just one of the major platforms, it becomes apparent that, in fact, the need to have some type of intervention and control over user-generated content is actually paramount. In many cases, people within these industries have described it to me as mission critical, which really raises a number of questions as to why it's so invisible.
MANOUSH ZOMORODI: It's so interesting. As a former foreign news reporter and producer, one of my jobs out in the field was to go through the tape we brought back from war zones, among other places, and make sure that certain images didn't get out. But what you're talking about is a scale way beyond what I was dealing with in TV news, right?
SARAH ROBERTS: Yeah, that’s certainly true, and I think it’s important to recognize a couple of things, Manoush. I think your link to the kind of work you did is quite important to make.
Of course, in the case of commercial content moderators, unfortunately, they don't have the kind of social support and status that journalists so often do, and certainly not the professional ethics codes and mutual recognition from other people in the field and from those who consume the material.
But I should also point out that when we're talking about commercial content moderation, what is being dealt with is really a small subset of all of that content being produced. And much of it doesn't even come to the attention of the commercial content moderators themselves until someone like you or me, or any other user of the platform, stumbles across that material, is exposed to it first, and then initiates that process of review by flagging the content and sending it to some sort of moderator. So in many ways, users are implicated very profoundly in this process as well.
MANOUSH ZOMORODI: The two former Microsoft workers who filed a lawsuit against the company late last year say that their work tracking down child pornography left them with PTSD, hallucinations, and permanent disability.
I just want to mention that we reached out to Microsoft for a statement, and the spokesperson said that the company uses filtering technology to distort images and reduce their psychological impact. They also provide and require psychological counseling for content moderation workers. And ultimately, they say that they disagree with the plaintiffs’ claims, and they take employee health and resiliency pretty seriously.
But Rochelle, you don’t get any sort of support for the work that you’re doing at home, presumably.
ROCHELLE LAPLANTE: That’s right. Yeah, the freelance platform that I work on doesn’t provide any support or any kind of health insurance or benefits or anything of that sort for the workers.
All of the support that we have is completely worker based, and it’s workers getting together and providing each other with informal support. But that’s the extent of it.
MANOUSH ZOMORODI: And what do you say to each other? I mean, are there some days where you’re just like, I’ve got to close down the laptop?
ROCHELLE LAPLANTE: Absolutely. Yes, definitely. It’s just a lot of talking about what did you see today, and how was it difficult? And sometimes it’s just sharing cat photos and funny videos on YouTube to get through it– things to get your mind off of it– and providing social support to each other.
MANOUSH ZOMORODI: And Sarah, when you hear about this lawsuit and the workers having PTSD, does that seem like a reasonable complaint to you? And if Microsoft did what it says it did, is that everything the company could do to minimize the harm?
SARAH ROBERTS: I'm not inside Microsoft. And throughout the course of my research, up to and including the piece that I wrote for The Atlantic this week, I've had a great deal of trouble getting transparency from the platforms themselves. So it's really hard to know, from Microsoft's point of view beyond that statement, how those practices translated into worker well-being.
One thing we know about this Microsoft case, which is quite interesting to follow, is that the workers started there back in 2007. So it isn’t clear to me how many of the filtering techniques that can be employed and some of these other practices that Microsoft articulated in its statement were actually in place at that time.
I also wonder what portion of the content that the workers were exposed to was actually material that couldn’t be filtered out, that was new user-generated content, didn’t exist in a known database already and, therefore, had to be reviewed by a human.
And then, of course, the question remains, to what extent is this material harmful, and at what point does the material become too much? I don’t think that is clear or known whatsoever.
So is it the fact that you could see one too many videos, and that’s too much for you to have been exposed to, and then you become disabled from the work? Could it happen that you just see one particular video that’s too much for you to take? We don’t know.
I think that's what makes this case so novel. And it isn't clear to me that some of these blanket statements about best practices being in place are necessarily sufficient.
MANOUSH ZOMORODI: Sarah, I just want to take a call, if that’s OK.
SARAH ROBERTS: Sure.
MANOUSH ZOMORODI: A content moderator actually calling in. Is that Jake from Iowa?
JAKE: Correct.
MANOUSH ZOMORODI: Hi, Jake. Tell us what you do.
JAKE: Sure. I'm a contract content moderator for a very large social media company, where I'm constantly going through pictures every day and getting paid by the picture. I'm actually an independent contractor for this company, and I've been doing it for six years.
And you just, essentially, become numb to it. I often compare it to being a detective or a coroner, honestly, because you're so numb to seeing some of these very, very graphic pictures that it just becomes a daily thing. We see thousands of things that aren't wrong, but then you regularly see one or two things that you do have to report.
Obviously, the job’s not for everybody. I’ve definitely seen people become mentally unstable doing it. But at the same time, it doesn’t happen to everyone.
MANOUSH ZOMORODI: Jake, thank you so much. I’m Manoush Zomorodi. This is Science Friday from PRI– Public Radio International.
Sarah, as you hear Jake describing the work, and as you've seen the work being done, what place does this set of practices leave for political speech that looks like pornography, or murder, or other unwanted content?
SARAH ROBERTS: Certainly, the commercial content moderation going on across social media complicates the notion of these platforms existing as free-expression sites for various kinds of democratic engagement. I can give you an example of this.
Certainly, if you look at the community guidelines or other rules of engagement on many of the major platforms, things like violence toward children, blood and gore, and excessively disturbing material are precluded on these sites.
But in the cases that I was looking at, the commercial content moderation, or CCM, workers I talked to discussed the fact that they were constantly receiving material from Syria, which is a well-known, horrifying war zone. And the material that they were receiving certainly contravened all of those guidelines.
And yet, the policy group above them at their particular platform, all of whom were full-time employees, made the decision to allow that material from Syria to stand, because it had, in their opinion, an advocacy goal. That certainly seems laudable and makes sense in context.
But what the worker in this case pointed out to me is that he also saw all kinds of other material coming from different types of war zones all over the world that was disallowed from being put online. And so he made the pretty apt and astute connection that, whether or not it was an intentional decision at the policy level, there seemed to be a mechanism that allowed material to stand that was, in fact, in line with US foreign policy in that particular region. And other material that would have been less favorable didn’t stand.
So we know that social media content has immense political implications; we know that coming out of the 2016 election, if for no other reason. We've talked a lot about fake news and these kinds of things. But I think the CCM work, and the role these individuals have in curating and gatekeeping content online, is quite underdiscussed.
MANOUSH ZOMORODI: Rochelle, we don't have a lot of time left, but I'm wondering, what would you like listeners to know about the work that you do, or to keep in mind when they're online and their Facebook feed just shows up looking the way it does?
ROCHELLE LAPLANTE: I think the most important thing I want people to know is that it's not computers and it's not AI doing this work. It's actual human beings. And when you're scrolling through your Facebook feed, or your Twitter feed, or whatever social media you're using and not seeing those images, just take a moment to realize that there are humans doing that work and making your feed appear that way. It's not some computer system that's handling it all for you.
MANOUSH ZOMORODI: And Sarah, any last word before we go?
SARAH ROBERTS: I think I just want to acknowledge the fact that many of these workers are under nondisclosure agreements that are quite severe, and the penalty for violating them is usually termination, if not other consequences. So I really appreciate the workers who have been willing to talk with me about these practices so we can know about them. If nothing else, if this were all automated, we wouldn't know about it at all, because computers don't violate NDAs.
MANOUSH ZOMORODI: Many thanks to Rochelle LaPlante, a content moderator in LA, and Assistant Professor of Information Studies Sarah T. Roberts.