Federal Awards for “Mis-, Dis-, and Malinformation” and other content moderation initiatives, 2010-2025
Ready to go? View the database.
liber-net has built a searchable database of over 850 awards issued by the U.S. federal government covering the topics of “mis-, dis-, or malinformation” (MDM) and other content moderation initiatives from 2010 to 2025. These awards have each been manually reviewed, with the intention of providing nuance and an alternative to the fishing-with-dynamite style of attempted administrative reforms.
Not everything in this database can be labeled a “censorship initiative.” Rather, the database tells the story of how the U.S. Federal Government became the lead player in developing the anti-disinformation field, skewing funding towards top-down, expert-driven, content moderation as the solution to an over-supply of sometimes-unreliable information. Among the many problems with this approach is the assumed omnipotence of those doing the moderation, to say nothing of the weaponization of “countering-disinformation” to attack political enemies.
The database includes a wide range of awards: from projects that sought to actively remove (or scale systems for removing) content from the Internet or report the results to officials, to programs that sought to leverage machine learning to detect “deepfakes” with no obvious plans to tilt the political scales. While the latter project might in fact be a value-add, or at least neutral, both sit within the broader “anti-disinformation” ideology and are documented here.
In our review, we deliberately excluded innocuous grants: those not linked to recent political controversies, and awards whose language indicated they were not part of the dominant anti-disinformation ideology. An NIH grant to study “E-cigarette-related nicotine misinformation on social media,” for example, was not included in our database.
We reviewed over 1,100 grants indexed on public funding databases, including usaspending.gov (the main database for tracking historical grant, loan, and contract data), grants.gov (which displays current opportunities), sam.gov (a website tracking registration records, which include contract award data), the Federal Audit Clearinghouse (a repository of standardized Single Audits from organizations receiving federal funds), and a variety of agency-specific award databases such as nsf.gov/awardsearch/ (NSF-specific), reporter.nih.gov (NIH-specific), defense.gov/News/Contracts (DOD-specific), and foreignassistance.gov (State Department/USAID-specific).
We primarily used keyword searches to uncover MDM and other information control initiatives, revealing a host of universities, NGOs, and private actors undertaking content-flagging and moderation activity, education programs, and surveillance. Using a relational database system, we compiled information about both the grant and the recipient: the federal agency source, funding amounts, types of activity funded, a rating, dates, relevant links, base country, and more.
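The relational structure described above can be sketched in miniature with SQLite. The table and field names below are illustrative assumptions based on the fields listed (agency, amount, activity type, rating, dates, country), not the database's actual schema, and the sample rows are invented:

```python
import sqlite3

# Hypothetical sketch of the relational structure described above;
# names are illustrative, not the project's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE recipients (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        base_country TEXT
    )""")
conn.execute("""
    CREATE TABLE awards (
        id INTEGER PRIMARY KEY,
        recipient_id INTEGER REFERENCES recipients(id),
        agency TEXT,           -- granting federal agency
        amount_usd REAL,       -- funding amount
        activity_type TEXT,    -- e.g. content flagging, education
        rating INTEGER CHECK (rating BETWEEN 1 AND 5),  -- red-flag rating
        start_date TEXT,
        end_date TEXT,
        link TEXT
    )""")

# Invented sample rows, for illustration only (not real awards).
conn.execute("INSERT INTO recipients VALUES (1, 'Example University', 'US')")
conn.executemany(
    "INSERT INTO awards VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
    [(1, 1, "NSF", 500000.0, "deepfake detection", 2,
      "2019-01-01", "2021-12-31", None),
     (2, 1, "State", 250000.0, "content flagging", 4,
      "2020-06-01", "2022-05-31", None)],
)

# Example query: total funding per agency, largest first.
totals = conn.execute("""
    SELECT agency, SUM(amount_usd) FROM awards
    GROUP BY agency ORDER BY 2 DESC
""").fetchall()
```

Keeping recipients in their own table lets one recipient hold many awards without duplicating its details, which is what makes aggregate queries like the one above straightforward.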
While this database includes grants and contracts as far back as 2010, our primary focus was those from 2016 to the present day. This reflects our hypothesis that the anti-disinformation field expanded rapidly in the wake of the 2016 Trump election and the Brexit referendum; our findings confirm it. If, and as, more information emerges, we will continue to add awards to the database, reaching as far back as 2008.
We note that nearly 150 MDM-related grants, mostly from the State Department, were awarded to “Miscellaneous Foreign Awardees.” Through diligent and creative searching we were able to identify a handful of these recipients; most, however, remain anonymous. This title may be used when multiple organizations are involved in a project; alternatively, the State Department may prefer that the names of some grant recipients not be made public, for any number of reasons.
As of today, there remain over 140 active federal awards for MDM and other content moderation initiatives. It is very much a live issue, as many of these awards continue to be paused or canceled. We are doing our best to track and update the database accordingly. Such historical documentation is important. With the recent political changes in the U.S., much of the content control funding is shifting to the U.K., the E.U., and private philanthropy. Knowing who the past key players were helps us identify possible new protagonists.
Rating system
To appropriately contextualize and filter such a large dataset, we established a rubric that rates every award by how egregiously it appears to violate principles of digital rights and free expression, on a scale of one (least) to five (most) red flags. The criteria are as follows:
- Projects that pursued takedowns of Internet content, or flagging with intent to have content removed,
- Initiatives that included a high level of personal surveillance or privacy violations, such as tracking individual location data or collecting personal information,
- Projects with strong government collaboration, particularly with military, police, and intelligence services,
- Content-flagging and “fact-checking” initiatives, particularly those that assumed a top-down or expert class monopoly on truth,
- Projects that ranked, evaluated, or responded to content in real time, scaling the speed of surveillance and response in ways that violated privacy or presumed god-like knowledge,
- Projects with a high level of automation that placed undue faith in the competency of their algorithms, scaling often-biased moderation to a whole new level, or
- Projects that set out to protect authorities from legitimate criticism or critique by labeling such criticism as “disinformation” or “harassment.”
Additional information
An article further outlining the work, including visualizations, can be found here. A report by The Free Press featuring our research was published on April 16, 2025.
We will continue to update the database over the coming months. We seek to be as accurate as possible; if you find errors or awards that are missing, please contact us.
If you are an academic interested in working with this data, please reach out. As a counterbalance to the many journals and Internet Studies researchers publishing papers promoting anti-disinformation programming and top-down content controls, we hope there might be an appetite for a critical meta-analysis of the sector.