The Fight for the Future of YouTube

By Neima Jahromi, newyorker.com

… because YouTube offers popular video producers a cut of ad revenue, the company had implicitly condoned Crowder’s messages.

“YouTube has the scale of the entire Internet,” … The site now attracts a monthly audience of two billion people and employs thousands of moderators. Every minute, its users upload five hundred hours of new video. The technical, social, and political challenges of moderating such a system are profound. They raise fundamental questions not just about YouTube’s business but about what social-media platforms have become and what they should be.

…so-called borderline content, which dances at the edge of provocation, is harder to detect and draws a broad audience. Machine-learning systems struggle to tell the difference between actual hate speech and content that describes or contests it.

Some automated systems use metadata—information about how often a user posts, or about the number of comments that a post gets in a short period of time—to flag toxic content without trying to interpret it. But this sort of analysis is limited by the way that content bounces between platforms, obscuring the full range of interactions it has provoked.
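
In rough terms, such metadata-only screening might look like the following sketch. The thresholds and field names are illustrative placeholders, not a description of any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class PostMetadata:
    author_posts_last_hour: int   # how often the user has posted recently
    comments_first_10_min: int    # early comment velocity on this post
    account_age_days: int         # how new the posting account is

def flag_for_review(meta: PostMetadata) -> bool:
    """Flag a post for human review using only metadata, without
    interpreting the content itself. Thresholds are hypothetical."""
    burst_posting = meta.author_posts_last_hour > 20
    comment_spike = meta.comments_first_10_min > 500
    new_account = meta.account_age_days < 7
    # Treat any two signals occurring together as suspicious.
    return sum([burst_posting, comment_spike, new_account]) >= 2
```

As the article notes, this kind of analysis breaks down when content hops between platforms, because no single site sees all of the interactions a post has provoked.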

YouTube also relies on anonymous outside “raters” to evaluate videos and help train its recommendation systems. But the flood of questionable posts is overwhelming, and sifting through it can take a psychological toll.

…the problem is that, for many videos, “explicit feedback is extremely sparse” compared to “implicit” signals, such as what users click on or how long they watch a video. Teen-agers, in particular, who use YouTube more than any other kind of social media, often respond to surveys in mischievous ways.
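
One way to picture the imbalance is a ranking score that must fall back on implicit behavior whenever explicit feedback is missing, as in the sketch below. The weights and signal names are hypothetical, chosen only to illustrate the idea:

```python
from typing import Optional

def engagement_score(
    explicit_rating: Optional[float],   # survey answer or rating; often absent
    watch_fraction: float,              # share of the video actually watched
    clicked_from_recommendation: bool,  # did the viewer click a recommended slot?
) -> float:
    """Blend sparse explicit feedback with dense implicit signals.
    All weights are illustrative placeholders."""
    implicit = 0.7 * watch_fraction + 0.3 * (1.0 if clicked_from_recommendation else 0.0)
    if explicit_rating is None:
        # No explicit feedback available: rely entirely on implicit behavior.
        return implicit
    # When an explicit rating exists, let it dominate but keep some implicit signal.
    return 0.8 * explicit_rating + 0.2 * implicit
```

Because explicit ratings are so rare, a system like this ends up learning mostly from what people watch and click, not from what they say they value.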

Business challenges compound the technical ones. In a broad sense, any algorithmic change that dampens user engagement could work against YouTube’s business model. 

Netflix, which is YouTube’s chief rival in online video, can keep subscribers streaming by licensing or crafting addictive content; YouTube, by contrast, relies on user-generated clips, strung together by an automated recommendation engine.

…the engine is designed to “dig into a topic more deeply,” luring the viewer down the proverbial rabbit hole. Many outside researchers argue that this system, which helped drive YouTube’s engagement growth, also amplified hate speech and conspiracy theories on the platform.

Political provocateurs can take advantage of data vacuums to increase the likelihood that legitimate news clips will be followed by their videos. And, because controversial or outlandish videos tend to be riveting, even for those who dislike them, they can register as “engaging” to a recommendation system, which would surface them more often.

In fact, the apparent democratic neutrality of social-media platforms has always been shaped by algorithms and managers.

By spotlighting its most appealing users, the platform attracted new ones. It also shaped its identity: by featuring some kinds of content more than others, the company showed YouTubers what kind of videos it was willing to boost.

The question of YouTube’s values—what they are, whether it should have them, how it should uphold them—is fraught.

“The U.N. is not just a conference center that convenes to hear any perspective offered by any person on any issue,” Mokhiber said. Instead, he argued, it represents one side in a conflict of ideas.

…it also helps fund school programs designed to improve students’ critical-thinking skills when they are confronted with emotionally charged videos.

And yet, on a platform like YouTube, there are reasons to be skeptical about the potential of what experts call “counterspeech.” 

“If we frame hate speech or toxicity as a free-speech issue, then the answer is often counterspeech,” she explained.

But, to be effective, counterspeech must be heard. “Recommendation engines don’t just surface content that they think we’ll want to engage with—they also actively hide content that is not what we have actively sought,” Hemphill said. “Our incidental exposure to stuff that we don’t know that we should see is really low.” It may not be enough, in short, to sponsor good content; people who don’t go looking for it must see it.

Theoretically, YouTube could fight hate speech by engineering a point-counterpoint dynamic. In recent years, the platform has applied this technique to speech about terrorism, using the “Redirect Method”: moderators have removed terrorist-recruitment videos while redirecting those who search for them to antiterror and anti-extremist clips.
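
Conceptually, the redirect idea is a lookup layered on top of search: queries that match a watchlist of recruitment-related terms surface a curated playlist of counter-extremist clips ahead of the organic results. The sketch below uses placeholder terms and video IDs; the actual implementation is not public:

```python
# Placeholder watchlist and curated playlist; real term lists and video IDs
# are maintained by moderators and are not publicly documented.
FLAGGED_QUERY_TERMS = {"recruitment_term_1", "recruitment_term_2"}
COUNTERSPEECH_PLAYLIST = ["anti_extremism_clip_a", "former_extremist_testimony_b"]

def search_results(query: str, organic_results: list[str]) -> list[str]:
    """Prepend curated counterspeech when a query matches a flagged term."""
    normalized = query.lower()
    if any(term in normalized for term in FLAGGED_QUERY_TERMS):
        return COUNTERSPEECH_PLAYLIST + organic_results
    return organic_results
```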

A YouTube representative told me that it has no plans to redirect someone who searches for a men’s-rights rant, say, to its Creators for Change–sponsored feminist reply. Perhaps the company worries that treating misogynists the way that it treats ISIS would shatter the illusion that it has cultivated an unbiased marketplace of ideas.

One way to make counterspeech more effective is to dampen the speech that it aims to counter.

He suggested that YouTube could suppress toxic videos by delisting them as candidates for its recommendation engine—in essence, he wrote, this would “shadowban” them. (Shadow-banning is so called because a user might not know that his reach has been curtailed, and because the ban effectively pushes undesirable users into the “shadows” of an online space.) Ideally, people who make such shadow-banned videos could grow frustrated by their limited audiences and change their ways.
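
In effect, the change he describes amounts to filtering the candidate pool before the ranking step ever runs. A minimal sketch, with hypothetical names, might look like this:

```python
def eligible_for_recommendation(
    candidates: list[dict],
    shadow_banned_ids: set[str],
) -> list[dict]:
    """Drop shadow-banned videos from the recommendation candidate pool.
    The videos remain watchable via direct links and search, but the engine
    never proposes them -- which is what pushes them into the 'shadows'
    without notifying their creators. Names are illustrative."""
    return [video for video in candidates if video["id"] not in shadow_banned_ids]
```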

Shadow-banning is an age-old moderation tool: the owners of Internet discussion forums have long used it to keep spammers and harassers from bothering other users. On big social-media platforms, however, this kind of moderation doesn’t necessarily focus on individuals; instead, it affects the way that different kinds of content surface algorithmically.

In April, Ted Cruz held a Senate subcommittee hearing called “Stifling Free Speech: Technological Censorship and the Public Discourse.” In his remarks, he threatened the platforms with regulation; he also brought in witnesses who accused them of liberal bias. (YouTube denies that its raters evaluate recommendations along political lines, and most experts agree that there is no evidence for such a bias.)

In his testimony, Kontorovich pondered whether regulation could, in fact, address the issue of bias on search or social platforms. “Actually enforcing ideological neutrality would itself raise First Amendment questions,” he said. Instead, he argued, the best way to address issues of potential bias was with transparency. A technique like shadow-banning might be effective, but it would also stoke paranoia. From this perspective, the clarity of the Creators for Change program adds to its appeal: its videos are prominently labelled.

Engineers at YouTube and other companies are hesitant to detail their algorithmic tweaks for many reasons; among them is the fact that obscure algorithms are harder to exploit.

YouTube has claimed that, since tweaking its systems in January, it has reduced the number of views for recommended videos containing borderline content and harmful misinformation by half. Without transparency and oversight, however, it’s impossible for independent observers to confirm that drop. “Any supervision that’s accepted by society would be better than regulation done in an opaque manner, by the platforms, themselves, alone,” Abiteboul said.

Before YouTube, he had worked at a Web site that showcased shocking videos and images—gruesome accidents, medical deformities. There he saw how such material can attract a niche of avid users while alienating many others. “Bikinis and Nazism have a chilling effect,” he said. YouTube sought to distinguish itself by highlighting more broadly appealing content. It would create an ecosystem in which a large variety of people felt excited about expressing themselves.

“We thought, if you just quarantine the borderline stuff, it doesn’t spill over to the decent people,” he recalled. “And, even if it did, it seemed like there were enough people who would just immediately recognize it was wrong, and it would be O.K.” 

The events of the past few years have convinced Schaffer that this was an error. The increasing efficiency of the recommendation system drew toxic content into the light in ways that YouTube’s early policymakers hadn’t anticipated. In the end, borderline content changed the tenor and effect of the platform as a whole. “Our underlying premises were flawed,” Schaffer said. “We don’t need YouTube to tell us these people exist. And counterspeech is not a fair burden. Bullshit is infinitely more difficult to combat than it is to spread. YouTube should have course-corrected a long time ago.”

Some experts point out that algorithmic tweaks and counterspeech don’t change the basic structure of YouTube—a structure that encourages the mass uploading of videos from unvetted sources. It’s possible that this structure is fundamentally incompatible with a healthy civic discourse.

“I wish they would recognize that they already do pick winners,” she said. “Algorithms make decisions we teach them to make, even deep-learning algorithms. They should pick different winners on purpose.”

Schaffer suggested that YouTube’s insistence on the appearance of neutrality is “a kind of Stockholm syndrome. I think they’re afraid of upsetting their big creators, and it has interfered with their ability to be aggressive about implementing their values.”

The Creators for Change program is open about its bias and, in that respect, suggests a different way of thinking about our social platforms. Instead of aspiring, unrealistically, to make them value-neutral meeting places—worldwide coffee shops, streaming town squares—we could see them as forums analogous to the United Nations: arenas for discussion and negotiation that have also committed to agreed-upon principles of human dignity and universal rights.

The U.N., too, has cultivated celebrity “ambassadors,” such as David Beckham, Jackie Chan, and Angelina Jolie. Together, they promote a vision of the world not merely as it is—a messy place full of violence and oppression—but as we might like it to be. Perhaps this way of conceptualizing social platforms better reflects the scale of the influence they wield.

Brown also said that he recognized the complexity created by YouTube’s global reach. “YouTube is trying to moderate and keep their values while trying to make money off, literally, the entire world,” he said. At the same time, he continued, “Being able to educate people on this huge scale is so important to me. I still feel that YouTube is the best place to do it.”

Steve Chen, the YouTube co-founder, recalled the platform’s early days, when he often intervened to highlight videos that could only be found on YouTube and that the algorithm might not pick up. He might place amateur footage of the destruction wrought by Hurricane Katrina, or documentation of a racist attack at a bus stop in Hong Kong, on the home page. “I remember a video that only had twenty-five views,” he told me. “I said, ‘There’s no way that this piece of content from some photographer in Africa would ever be viewed from around the world without YouTube, so I’m going to feature this.’ ” 

Creators for Change, he explained, sounded like a “more evolved version of what I was trying to do on the home page—trying to showcase, encourage, and educate users that this is the kind of content that we want on YouTube.” But the technological developments that have made the platform’s global reach possible have also made such efforts more difficult. 

“Regardless of the proportions or numbers of great content that YouTube wants on the site, it’s the way the algorithm works,” he said. “If you’re watching this type of content, what are the twenty other pieces of content, among the billions, that you’re going to like most? It probably won’t be choosing out of the blue.” 

Chen, who left YouTube a decade ago, told me that he doesn’t envy the people who have to decide how the system should work. “To be honest, I kind of congratulate myself that I’m no longer with the company, because I wouldn’t know how to deal with it,” he said.

Brown, for his part, wanted the platform to choose a point of view. But, he told me, “If they make decisions about who they’re going to prop up in the algorithm, and make it more clear, I think they would lose money. I think they might lose power.” He paused. “That’s a big test for these companies right now. How are they going to go down in history?”
