FAQs

Forty civil and human rights organizations have come together to tell internet companies: It’s time to Change The Terms.
001

What is the goal of these policies? What problem is this solving?

Just as the internet has created immense positive value by connecting people and creating new communities, it has also given new tools to those who want to hatefully threaten, harass, intimidate, defame, or even violently attack people different from themselves. White supremacists and other organizations engaged in these sorts of hateful activities use social media and other major tech platforms to mobilize, organize, fundraise, and normalize racism, sexism, bigotry, and xenophobia. In the past few years, hate activity online has grown significantly as the “alt-right” emerged from the shadows. At the Unite the Right Rally in Charlottesville in 2017, we saw a prime example of how the internet’s invigoration of hate groups can result in real world violence.

Meanwhile, the internet has opened up unprecedented opportunities for diverse communities to speak, create, educate, and entertain by building a direct connection with their audiences, and it has provided platforms to voices that would otherwise be silenced. Yet these same creators, including people of color, women, religious minorities, and members of the LGBTQIA community, are routinely harassed and threatened online; these attacks stifle their voices and chill their participation on these platforms.

These harms interfere with the ability of entire communities to use the most important technological advance of the modern era. Some of the larger tech companies have made attempts (some more successful than others) to address hateful activities on their services. Indeed, some attempts have been over-inclusive, silencing diverse voices combating racism and discrimination. Most tech companies are committed to providing a safe and welcoming space for all users, even if they have so far failed to follow through on that commitment. But when tech companies try to regulate content arbitrarily, without civil rights expertise, or without sufficient resources, they can exacerbate the problem.

The goal of these policies is to provide greater structure, transparency, and accountability to the content moderation that many large platforms are already undertaking. The platforms want fair policies that are effectively enforced. We want to help them manage their platforms responsibly and respectfully.

002

Who are you and why should we listen to you?

The signatories and proponents of these model policies are a coalition of civil rights, human rights, technology policy, and consumer protection organizations. The policies themselves were drafted by the Center for American Progress, Color of Change, Free Press, the Lawyers’ Committee for Civil Rights Under Law, the National Hispanic Media Coalition, and the Southern Poverty Law Center. These drafters spent approximately nine months consulting with a wide range of civil and human rights experts and technologists to try to develop a thorough yet flexible set of policies.

003

Why are you doing this?

For the past few years, civil and human rights organizations have seen hatred grow online and watched as major tech companies have repeatedly failed to adequately address the problems that their own platforms are creating. Many organizations are working hard to reduce hateful activities online and significant improvements have been made by some tech companies since Charlottesville.

But until now, there has not been a set of uniform model policies that civil and human rights organizations could point to as best practices. We hope that these new policies can serve as a benchmark to measure the progress of major tech companies, as well as a guide for newer companies wrestling with some of these issues for the first time.

With provisions calling for a strong appeals process, we hope these policies will help companies more effectively combat hate on the platform while avoiding silencing the users and creators combating hate online.

And we hope to use these policies to hold tech companies accountable. With this yardstick we hope to measure and report on which services and companies are best protecting their users.

004

If a company adopts these policies as its Terms of Service, what would the rules be?

Under these policies, a company commits to not allowing its services to be used for hateful activities.

  • Enforcement. The company will use the best available tools—with appropriately trained and resourced staff, technological monitoring, and civil rights expertise—to enforce the rules in a comprehensive and non-arbitrary manner.
  • Right of Appeal. The company will provide notice and a fair right of appeal to users whose content is taken down. This is particularly important for creators of color.
  • Transparency. The company will regularly provide robust transparency reports and data so that outside groups and researchers can effectively monitor the company’s progress, study trends, and recommend improvements.
  • Evaluation and Training. The company will invest in its staff and training practices to ensure that it is providing sufficient resources to address the problem, and regularly audit its practices.
  • Governance and Authority. The company will make a clear commitment to the importance of this issue by designating a senior executive, appointing a board of directors committee, and engaging a committee of external advisors, all dedicated to addressing hate and discrimination on the platform.
  • State Actors, Bots, and Trolls. Recognizing that social media in particular is a new front for information warfare, the company will take affirmative steps to identify, prohibit, and disrupt those who try to conduct coordinated hateful campaigns on the service.

005

What is a “hateful activity”? Will this initiative block free speech?

As defined in the model terms of service, “hateful activity” means “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”

Because of the strict definition of hateful activity found in the terms of service, these policies will not block free speech. First, as an initial matter, the First Amendment does not apply to the policies of a private company; it only applies to actions taken by a U.S., state, or local government.

But even if it did apply, the First Amendment does not protect all speech. We carefully wrote the definition of hateful activity to cover types of speech that courts have said are not protected as free speech: incitement, violence, intimidation, harassment, threats, and defamation.

We also looked to hate crimes laws to determine what types of characteristics to protect. It is wrong to discriminate against someone based on their immutable characteristics—those personal traits that one cannot change or that are fundamental to one’s identity—such as race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.

006

Why not just get the government to ban hate speech and regulate these companies?

There are several legal obstacles—which serve important purposes—that prevent government action from unilaterally solving this problem, even if we wanted it to. Moreover, the tech companies who built these highly profitable platforms also created these problems as collateral damage. Those who profit from these systems should bear the burden of solving the problem.

First, the First Amendment protects free speech in the United States, including hate speech. American federal and state governments cannot ban hate speech. But even outside the United States, speech laws vary wildly from one country to another. It is preferable to develop a set of policies that major tech companies can apply globally so that hateful actors cannot launder their activity by routing their traffic through a different jurisdiction.

Second, in the United States, hateful activities and hate crimes are already illegal in most jurisdictions. An injured person in many cases can bring a civil lawsuit against someone who defames, harasses, or threatens them. But such one-off litigation is very slow and expensive, and sometimes you cannot identify your attacker; many marginalized communities do not have sufficient access to legal services to make this an effective strategy in most cases. Online hate is a systemic problem that needs a systemic solution.

Third, the United States gave tech companies some limited legal immunity under the Communications Decency Act. This immunity is vital to internet innovation and small startups; without it we would not have the internet as we know it today. But the trade-off is that we expect the tech companies to police their own platforms.

Finally, and perhaps most importantly, if any government began regulating online speech directly, there would be huge risks that the majority would silence and oppress the minority. Historically, censorship laws have always disproportionately silenced activists and minorities.

007

Are you worried that these policies will be used to silence minorities online?

Yes, we are always worried that any policy could have disparate impacts on minority groups. But we have to look realistically at the situation we are facing today.

First, minorities are already receiving disparately discriminatory treatment from many tech companies. Many—if not most—tech companies are already creating and enforcing rules about what type of content is allowed on their platforms. In our experience, these rules are often poorly written or inadequately enforced. They often only respond when someone affirmatively reports a violation (sometimes trolls abuse these systems by flagging legitimate content).

And the enforcement teams are often understaffed, under-resourced, and poorly trained. In many cases, the appeals policies are insufficient to allow minorities whose speech does not constitute hateful activity to get their content back online. All of these factors cause the current policies and practices to disproportionately silence the speech of minorities.

Second, abstaining entirely from content moderation results in even more hate and discrimination targeted at marginalized communities. The platforms that historically have taken the most permissive approach to content (such as 4chan, Twitter, and—until recently—Reddit) are also the places where the worst conduct occurs. A Wild West internet is not the answer.

Our goal is to help tech companies develop policies and best practices—with input from civil and human rights experts—that appropriately moderate their platforms while avoiding discriminatory impacts. These policies are a living document, intended to reflect the most effective policies and practices to combat hate activities while protecting minority voices.

008

What companies do you want to adopt these policies?

These policies are intended for internet companies that provide the following types of services:

  • Social media, video sharing, communications, marketing, event scheduling, or ticketing
  • Online advertising
  • Financial transactions or fundraising
  • Public chat or group communications
  • Domain names
  • Building or hosting websites, blogs, or message boards

These policies are not intended to be used by Internet Service Providers (e.g., Comcast or AT&T). We are committed to an open internet. Nothing in these policies is intended to allow or support blocking, throttling, or prioritizing any lawful content by an Internet Service Provider.

009

How will you measure whether a company is complying with these policies?

We intend to continue to engage directly with tech companies, and begin engagement with companies we have not worked with previously. We are happy to sit down with them at any time to discuss the policies and any concerns they may have. We will be actively encouraging tech companies to adopt and implement these policies.

After we have given tech companies a reasonable amount of time to digest, adopt, and implement these policies, we will initiate an evaluation process to see how companies stack up. This process will measure both tech companies’ formal policies and how they implement them. Without effective and evenhanded execution, a formal policy is just empty words.

010

Technology changes quickly. Are you worried these policies won’t keep up?

These model policies are meant to be a living document. This is Version 1.0. We fully expect that they will need evaluation and revision going forward. We will learn lessons from their implementation, and online behavior will evolve. We hope that the transparency and reporting procedures in these policies will give independent researchers and the public the data they need to figure out what works and what does not. And then we will revise.

In addition, we recognize that these are model policies and that every company has a different architecture and business model. We have tried to write these policies in a flexible manner so that companies can adapt their execution to the structure of their services while maintaining some baseline expectations for fairness, transparency, and consumer protection.

011

AI and machine learning algorithms are often infected with bias and produce discriminatory results. Why would you want tech companies to use them as part of their enforcement practices?

We believe it is unrealistic to expect human reviewers to adequately monitor most large tech platforms. The volume of data is just too large. Without technological assistance, under-enforcement would be rampant. The human reviewers would always be swamped with too much content to review and only the most egregious or loudest cases would be addressed. That would allow hateful activities to flourish on these services.

But when tech companies use software to help monitor for hateful activities on their platforms, we recognize the risks of algorithmic bias and require the companies to take precautions to avoid it. All enforcement decisions must involve a human. Companies need to design and test their algorithms for bias, and routinely audit them even after deployment.

012

Will tech companies be able to use these policies to further invade my privacy, anonymity, or use of encryption?

No. Tech companies should not be allowed to use these policies as an excuse to invade their users’ privacy, strip them of their anonymity, or undermine the security and privacy of encrypted messaging services. Hateful activities can be reduced while respecting consumers’ rights.

Internet companies must ensure that their efforts are tailored to the mission of addressing hateful activities, and do not inappropriately invade users’ privacy, profile users based solely on their identity or affiliations, or initiate investigations solely based on offensive speech that does not qualify as hateful activities.

We intend these policies to be flexible depending on the nature of the service the tech company is providing. For some companies, requiring an authentic identity is part of the structure of the platform. For others, their users value anonymity. When tech companies are structuring their practices to implement these policies, they will need to take into account the reasonable expectations of their users. However, a commitment to anonymity cannot be a reason to not address hateful activities. Similarly, a commitment to users disclosing who they are has not in and of itself stopped these kinds of hateful activities on social media platforms.

013

Will government agencies be able to use these policies to control free speech?

No. Tech companies should not allow government actors to use their tools to attempt to remove content they find objectionable. Governments have other means by which to address content concerns through the usual legislative, judicial, or regulatory mechanisms that are (or should be) accountable to their constituency.

For instance, in the United States there are strong restrictions on what speech can be limited by government and the requirement for due process prior to such limitations. Nothing in these policies should be interpreted to grant additional authority to government or to grant government actors extrajudicial influence over tech companies’ content.
