Content Moderation and Removal Conference Recap

By Olivia Manning-Giedraitis, 3L, Santa Clara Law


          In February, representatives from numerous Silicon Valley companies, including Google, Facebook, Wikimedia, Yelp, and Pinterest, gathered at Santa Clara Law to discuss content moderation and removal. To view the conference agenda, including topics and panelists, visit the conference website. This post recaps some of the day’s highlights.

          Historically, Internet companies have been tight-lipped about what content they moderate and how they do it. However, Santa Clara Law Professor Eric Goldman noted that this attitude appears to have reversed: the participating companies wanted the opportunity to discuss these matters publicly. Later in the day, UCLA Professor Sarah T. Roberts said “the time is right to start having these conversations together.” The proceedings reflected this optimistic attitude toward a previously ‘taboo’ subject.

          The conference began with a recorded video from Senator Ron Wyden of Oregon. He challenged Internet companies to step up their content moderation efforts and not neglect their obligation to protect the public. He made it clear that the notion that websites can avoid taking responsibility for their content “isn’t going to fly.”

          After Sen. Wyden, Professor Goldman and Daphne Keller, Director of Intermediary Liability at the Stanford Center for Internet and Society, provided overviews of the domestic and international laws governing content moderation. Two major takeaways were that United States companies cannot take Section 230’s protections for granted, and that the absence of Section 230-style protections in other countries makes content moderation abroad considerably more complicated.

          The rest of the day consisted of six panels, covering the details of nine companies’ content moderation operations, the history and future of content moderation, employee welfare considerations, and more.

          When discussing their operational details, the participating companies were quite candid, though no one shared their removal algorithms! It was clear that each company had its own approach and attitude toward content moderation. Attendees were keen to hear the details, and there was plenty of note-taking during the presentations, especially Facebook’s and Google’s.

          Not surprisingly, the larger companies’ operations reflected their sizable resources, while the smaller companies explained how they handled all content issues with a small team and a limited budget. The companies’ operations also reflected differences in their business models. For example, in contrast to several of the other participating companies, Dropbox does not show advertising on user content, which reduces the volume of issues it must contend with. As another example, Paul Sieminski of Automattic (WordPress) advocated against automated content removal processes, while Wikimedia relies heavily on bots to detect and fix article vandalism.

          Throughout the day, questions kept coming up about how Internet companies can improve their content moderation functions. Participants uniformly agreed that diversity among content moderation team members is essential. As David Watkis of Automattic explained, hiring people of different professional, cultural, and personal backgrounds, with a variety of experiences, improves a company’s collective perspective on how site content is interpreted.

          Overall, attendees appreciated the chance to explore a topic previously not discussed in public, and there was strong interest in similar events in the future.