The federal government is amassing a network of artificial intelligence and machine learning capabilities through a series of federal grants to researchers and businesses, according to a report at the Federalist.
The report states that through more than 500 federally funded contracts or grants since 2020, the government is seeking firm control over “misinformation” and “disinformation,” using AI and ML to monitor internet chatter.
The Federalist reports that the systems will be able to identify the origins of what the government deems threatening messages or hate speech in real time. This would make it possible to suppress amplification of such speech before any unapproved messaging goes viral across internet platforms.
Companies receiving federal funding include NewsGuard, which received $750,000 from the Small Business Innovation Research program to develop its “Fingerprints” product, which the company describes as “tracking disinformation campaigns with human intelligence and AI.”
Another company, PeakMetrics, has received $1.5 million from the Department of Defense. PeakMetrics tracks over 1.5 million “news sources, blogs, social networks, podcasts, TV/radio, [and] email newsletters” and was funded by the Air Force to “create technology for rapid assessment and quantification of disinformation for DoD operators.”
Omelas Inc. received over $1 million in federal funding for “research and development.” The company describes its work as analyzing the most “influential newspapers, TV channels, government offices, militant groups, and more across a dozen social networks and messaging apps, thousands of websites, and thousands of RSS feeds,” including in Russia, Iran, and China.
A company called Primer Technologies was awarded $3 million for what is described as “social media event monitoring.” The company’s document titled “The Strategic Imperative of AI to Speed Up Decision Cycles” outlines how its AI technology can assess online conversations. It can help identify the “entities, relationships & locations” of those in a discussion, detect “sentiment” in multiple languages, curate images relevant to the topic, and identify opposing narratives and who is promoting them.
“Primer alerts [users] to hard-to-find connections such as evolving sentiment across multi-lingual streaming data, and bot-amplified narratives, an indicator for potential disinformation. Command allows [the user] to track and categorize threats surfacing and clustering images and narratives for reporting and further analysis,” the document reads.
Tracking online conversations and building large, sweeping databases of news-related information, combined with AI-based speech recognition, appear to be the main focus of the selected government programs, which are often procured by military entities such as the U.S. Air Force or Navy.