Public Comment to the U.S. Federal Trade Commission on Technology Platform Censorship


The United States federal government has invited public comment to better understand how technology platforms “deny or degrade” users’ access to services based on the content of the users’ speech or their affiliations, including activities that take place outside the platform. Below is our letter to the Federal Trade Commission stating our positions on technology platform censorship:

 

TO: Federal Trade Commission

FROM: liber-net

DATE: May 21, 2025

SUBJECT: Re: Request for Public Comment Regarding Technology Platform Censorship 

 

liber-net would like to provide the Federal Trade Commission with information related to its inquiry, specifically question 5(b), concerning government involvement in technology platform censorship.

 

About liber-net

liber-net is a digital civil liberties initiative working to re-establish free speech and civil liberties as the default standard for our networked age. We are concerned about corporate and government censorship, and about a civil society sector that has discreetly pushed for and developed speech controls under the guise of combating “mis-, dis-, and malinformation” (MDM).

liber-net seeks to enable free speech, both offline and online, and to support technologies that facilitate individual agency, collective endeavor, and the free exchange and circulation of ideas. We accomplish this through writing, research and publication, media interventions, campaigning, events, and network building. Our recent research has uncovered the extent of federal funding of, and involvement in, content moderation, and we have developed policy proposals to reverse these actions. We published the former as a searchable database and the latter as a policy paper. We are pleased to share this work with the Federal Trade Commission as it seeks to understand how consumers have been harmed by technology platform policies.

 

Introduction

It is increasingly clear that the U.S. federal government developed a large-scale system to coordinate the suppression of its citizens’ First Amendment-protected speech. Evidence supporting this assertion comes from the Twitter Files, discovery evidence in the combined cases of Murthy v. Missouri and Kennedy v. Biden, and the U.S. House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government. This body of evidence has demonstrated the existence of a network of nongovernmental organizations (NGOs), academic institutions, think tanks, and major technology companies often working directly with, or under pressure from, the government to control flows of information and censor online content. This network is sometimes referred to as the “Censorship-Industrial Complex.”

On August 27, 2024, Meta CEO Mark Zuckerberg issued a statement confirming that the Biden administration pressured Meta to censor First Amendment-protected speech relating to Covid and the Hunter Biden laptop story. In the case of Covid, Zuckerberg revealed that the Biden White House “repeatedly pressured our teams for months to censor certain Covid content, including humor and satire.” He also expressed regret that Facebook suppressed the New York Post story about Hunter Biden’s laptop after receiving a warning from the Federal Bureau of Investigation (FBI) regarding a “Russian disinformation campaign.” Beginning in January 2025, Meta moved to roll back some of its speech controls on the platform, including severing relationships with third-party fact-checkers; liber-net issued a statement welcoming these developments and suggesting further reforms.

In addition to direct FBI pressure, 51 former intelligence officials claimed the Hunter Biden story was a “Russian information operation” in an effort to discredit it. A host of major media outlets, NGOs, and fact-checkers repeated this assertion in lockstep. The story was quickly suppressed on platforms like Twitter and Facebook, potentially influencing the outcome of the presidential election, which was mere weeks away.

Federal officials engaged in similar behavior throughout the Covid pandemic. Multiple leaked documents revealed the government placing direct pressure on social media platforms to censor online speech, including seeding or supporting academic and NGO consortiums to act as proxies to hide this government pressure. Perhaps the best-known example is the Virality Project, an endeavor initiated by the Department of Homeland Security (DHS) and led by the Stanford Internet Observatory (SIO). This project pushed for the censorship of academics and individuals who disagreed with policies of the Centers for Disease Control and Prevention (CDC), even advising major social media partners to label true stories of vaccine side effects as “misinformation.”

Unfortunately, these high-profile incidents are likely just a small fraction of instances where the U.S. government has put its thumb on the scales to influence content moderation decisions on private platforms. While government officials have claimed that they are merely using their own speech capacities to make policy, their pressure campaigns are implicitly backed by the threat of using broader regulatory or legislative powers to bring companies to heel if they do not comply.

Open discourse is the central pillar of a free society, essential for holding governments accountable and fundamentally protecting and empowering vulnerable groups. Protections for individual speech and expression apply not only to views we agree with but also to those we strongly oppose. The Supreme Court has repeatedly ruled in First Amendment cases that the “government has no power to restrict expression because of its message, its ideas, its subject matter, or its content,” and has explained that “if there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.” Indeed, as Justice Breyer noted, “it is perilous to permit the state to be the arbiter of truth,” even when such truth can actually be established (United States v. Alvarez, 567 U.S. 709, 731–32) – to say nothing of cases when truth, in fact, cannot. 

Notably, the First Amendment prohibits not only the outright banning of speech but the mere abridging of it. The writers of the U.S. Constitution and Bill of Rights deeply understood that the ability to freely speak, write, and publish is a foundational element of human nature and is therefore an inalienable right. The protections they enshrined in the First Amendment remain some of the strongest bulwarks against authoritarian censorship and tyranny ever devised.

The broad pattern of government overreach we have witnessed over recent years demonstrates a strong need to reinvigorate free speech protections across the United States. liber-net is pleased to share a series of ideas for how the federal government might achieve this goal, complementing our recent research into digital free speech-related legislation at the state level.

 

Our Federal MDM Funding Database

liber-net has built a searchable database of nearly 900 awards issued by the U.S. federal government covering the topics of “mis-, dis-, or malinformation” (MDM) and other content moderation initiatives from 2012 to 2025. These awards have each been manually reviewed, with the intention of providing nuance and an alternative to the fishing-with-dynamite style of attempted administrative reforms.

Not everything in this database can be labeled a “censorship initiative.” Rather, the database tells the story of how the U.S. Federal Government became the lead player in developing the anti-disinformation field, skewing funding towards top-down, expert-driven, content moderation. Among the many problems with this approach is the assumed omnipotence of those doing the moderation, to say nothing of the weaponization of “countering-disinformation” to attack political opponents.

The database includes a wide range of awards: from projects that sought to actively remove (or scale systems for removing) content from the Internet or report the results to officials, to programs that sought to leverage machine learning to detect “deepfakes” with no obvious plans to tilt the political scales.

Our methodology involved reviewing over 1,100 grants indexed on public funding databases including usaspending.gov (the main database for tracking historical grant, loan, and contract data), grants.gov (which displays current opportunities), sam.gov (a website tracking registration records, which include contract award data), the Federal Audit Clearinghouse (a repository of standardized Single Audits from organizations receiving federal funds), and a variety of agency-specific award databases such as nsf.gov/awardsearch/ (NSF-specific), reporter.nih.gov (NIH-specific), defense.gov/News/Contracts (DOD-specific), and foreignassistance.gov (State Department/USAID-specific).

We primarily used keyword searches to uncover MDM and other information control initiatives, revealing a host of universities, NGOs, and private actors undertaking content-flagging and moderation activity, education programs, and surveillance. Using a relational database system, we compiled information about both the grant and the recipient, the federal government agency source, funding amounts, types of activity funded, a rating, dates, relevant links, base country, and more.

While this database includes grants and contracts as far back as 2010, our primary focus was those from 2016 to the present day. This was based on the hypothesis that the anti-disinformation field rapidly expanded in the wake of the Trump election and the Brexit referendum; our findings confirm this hypothesis. 

 

Proposals to End Government Coercion of Social Media Platforms and Restrictions on Legal Speech

Much of the justification for government censorship has occurred under the rubric of countering so-called “mis-, dis-, and malinformation” (MDM) and “hate speech.” An immediate step should be to remove the concept of “malinformation” from all government documents and policies. This concept is deeply flawed, as it often involves factual or true information presented in a context that is inconvenient for another social actor or interest group.

The concepts of “misinformation” and “disinformation” should only come under the purview of the U.S. government when they clearly involve defamation, fraud, criminal activity, or large-scale foreign interference operations. Even then, caution is essential, as recent years have seen legitimate domestic dissent frequently labeled as “Russian disinformation” or other delegitimizing terms as a pretext for censorship. Indeed, the First Amendment creates “breathing space,” protecting hyperbole and even false statements “inevitable in free debate.” This applies even to deliberately false statements made with “actual malice,” as the Constitution does not allow for prosecutions for libel on the government as an entity (New York Times Co. v. Sullivan, 376 U.S. 254 [1964]).

The second key justification for censorship has been claiming the need to counter “hate speech,” an inherently subjective concept. Courts have ruled that restrictions on hate speech would conflict with the First Amendment’s protection of the freedom of expression, and thus “hate speech” receives constitutional protection. The federal government cannot and should not police “hate speech,” except in limited cases of true threats, incitement to imminent lawless action, discriminatory harassment, or defamation. In all other cases, it is simply not within the purview of the government to police legal online speech. We do not mean to suggest that hate speech is not problematic in online spaces, but rather that it is not the government’s role to act as an arbiter of such speech. Other, more creative modalities that do not infringe upon the First Amendment need to be employed to tackle these challenges.

Unfortunately, the federal government has engaged in unprecedented attempts to control information flows and public opinion over the past few years, attempting to circumvent First Amendment limitations via the use of NGO, academic and think tank intermediaries who present themselves as research or policy initiatives but in fact frequently flag content to social media platforms for labeling and removal. 

Allowing free and rigorous debate within the marketplace of ideas, where disfavored speech is countered by favored speech, is the only acceptable – albeit, imperfect – alternative to centralized control of information. The Federal Trade Commission should prioritize reversing the digital speech restrictions and policies implemented in the past by:

  1. Declaring that it will be the policy of the FTC to uphold First Amendment-protected speech across all of its Bureaus, policymaking activities, enforcement actions, and other functions;
  2. Promulgating regulations to limit the ability of Interactive Computer Service (“ICS”) companies to remove users, label content as misinformation, or share user data, except in specific cases such as criminal investigations or threats to public safety; and
  3. Pursuing legal actions and policies to establish that the government violates the First Amendment when it privately solicits a third party to remove another person’s lawful political speech from an online platform, and moving legal precedent away from a focus on the line between “persuasion” and “coercion,” since requests from government officials are inherently intimidating.

Special attention should also be paid to Section 230 of the Communications Decency Act, a critical law that generally protects ICS providers such as social media companies from liability for user-generated content on their platforms. This legal shield allows companies like Meta (Facebook), Twitter (now X), and Google (YouTube) to host a vast range of content without liability for defamation, unlike news outlets, while simultaneously permitting them to moderate content by removing what they determine to be harmful or inappropriate without being liable for censorship claims. Social media companies as we know them would likely not exist without the delicate balance created by this section of law and the jurisprudence it has inspired.

Social media companies’ use of Section 230 has sparked heated debate over the past decade or so, as the role of these influential companies in controlling the flow of news and information has grown. Some critics have argued that these platforms have overreached in moderating content by removing posts or banning users without transparency, acts which potentially violate the spirit, if not the letter of Section 230. Others believe platforms should bear more responsibility, especially when their services are used to spread disfavored information. This has led to calls for reform from across the partisan spectrum, with proposals ranging from limiting the immunity provided by Section 230, to enhancing transparency in content moderation practices, to repealing the law altogether.

Should social media companies wish to continue curating the content on their platforms (as is their right), then the FTC should consider antitrust options to ensure that diverse viewpoints can be shared across a range of channels. While Section 230 has been vital in fostering the growth of social media, its future may include reforms aimed at addressing the complexities of moderating speech in the digital age. 

The following options should be considered as these changes are debated:

  1. The FTC should collaborate with the National Institute of Standards and Technology (NIST) and the Department of Justice (DOJ) to produce a joint report evaluating antitrust proposals that could break up large social media and technology companies and reset the market so as to create space for a wider range of platforms that could better express the diversity of viewpoints within the country.
  2. To the extent currently possible, encourage or require all large social media companies to submit detailed semiannual reports to the FTC, which the FTC shall make public 30 days after receipt. These reports must include:
    • Descriptions of their content management and terms of use policies;
    • Their use of third-party fact-checking organizations;
    • Which employees or automated systems are used to enforce their policies; and
    • Detailed, fully de-identified information on the number of moderated or affected posts, the number of impacted individual users, and justifications for any actions taken.
  3. Create an office within the FTC with the unique authority to enforce all new laws, regulations, and reporting requirements relating to large social media companies.

Conclusion

The previous administration developed and unleashed a wide-scale, government-sponsored censorship effort. This modern, savvy system of information controls has been led by agencies and offices from DHS/CISA to the NSF, the FBI, the White House, and the National Institutes of Health. Over the past few years, U.S. government funding of various projects to counter mis- and disinformation (“MDM”) has surged into the hundreds of millions of dollars, with federal money seeping into a variety of NGOs, for-profit government contractors, and nearly every corner of academia.

After years of tension, it is now widely acknowledged that the counter-MDM movement and “fact-checking” campaigns to censor and shape the speech of U.S. citizens violated core American values and norms of free expression. We believe the Federal Trade Commission is in a unique position to reverse some of the harmful information suppression initiatives of the previous administration. We thank the Commission for its consideration on this important matter.

 

Sincerely,

Andrew Lowenthal

CEO, liber-net


Network Affects is liber-net founder Andrew Lowenthal’s Substack exploring digital authoritarianism: privacy threats, biometric ID, surveillance, programmable currencies, and attacks on digital civil liberties and free expression from the ‘anti-disinformation’ and ‘fact-checking’ fields.
