AI chatbots might be giving false answers, propaganda. Culprit: Russia

Written by Amy Ta, produced by Angie Perrin

A graphic shows a person communicating with an AI chatbot. Photo credit: Sanket Mishra/Pexels

When you ask ChatGPT or another AI chatbot a question, how do you know whether the answer is accurate or propaganda from a foreign government? Russian disinformation is now shaping some answers from major AI chatbots operated by OpenAI, Meta, Google, and Microsoft. Online security experts say a Kremlin-linked network called Pravda published nearly 4 million pro-Russia articles targeting 49 countries last year. The campaign isn’t just about influencing an individual reader. It’s meant to poison the information that trains chatbots, a tactic being called large language model (LLM) grooming.

AI bots all want large databases of information to comb through, which is why they’ve gotten in trouble for scanning pirated books and the like, but they vary in how much weight they give to different sources, says Joseph Menn, a Washington Post tech reporter specializing in hacking, privacy, and surveillance. He adds that you’re more likely to get an incorrect answer when searching for new information.

Russia in particular has been fine-tuning its disinformation techniques as AI chatbots evolve, and one subject that’s a big focus for them is Ukraine.

“They're trying to dissuade Western European countries from helping Ukraine. And so sometimes they make up stories about fictional Western European figures who go to Ukraine to help, and then they get killed, which is very sad, or would be, if they actually existed. But some of these stories might originate in a conventional state-controlled propaganda organ … but many search engines know to down-rank those in value,” Menn explains. “So then they get laundered. Sometimes it's through officially sponsored or unofficially sponsored Telegram channels that are open to the public.”

What’s new, he highlights, is Russia getting involved with sources that look like ordinary news websites with lots of content. However, the sites have terrible interfaces and no organization.

“Garbage information sites … are recycling propaganda and laundering it. It's called information laundering, where they take a narrative that is state-approved, and then make it seem like it's a fair news summary of what's been happening.” 

As these electronic sources feed each other bad information, Russian trolls then edit it into Wikipedia pages, he says.

“For example, in Ukraine disinformation, there are tons and tons of Wikipedia pages about military specifics, like this kind of tank has this kind of top speed and blah, blah, blah. And so those pages aren't widely read. So it's easier to be an editor on one of those because Wikipedia is run by volunteer editors. … So these Russian trolls will put in a number of edits, many of which are correct, and then they'll put in some more that are incorrect, and then link back to these content farm sites. So then the AI chatbots … when they search the web, really, really like Wikipedia because it's usually right. And once they get into Wikipedia, there's a better chance of getting into a chatbot’s output.”

Menn says many people tracked this activity in the past, but misinformation research is now under attack. The U.S. government has made clear it doesn’t want federal workers doing this work, and some academics and think tank researchers are facing lawsuits over allegedly participating in a government censorship project. Plus, over the past week or two, the National Science Foundation has cut grants to misinformation studies, largely in the name of free speech.

He continues, “Social media companies also used to look for this stuff and do a bigger job of stamping it out. But then they were sued. … So they've backed off as well. … Way fewer outside researchers [are] doing it. The U.S. government is not doing it itself. So where I'm seeing most of the information is from Europe.”

What should consumers look out for? Fortunately, chatbots are now increasingly identifying their sources, Menn says. However, they don’t say whether those sources are high-quality, so folks need to read around and keep in mind that well-known, reputable news sources don’t suddenly spring into being. Plus, some sites are deliberately misleading: Russia has tried to make sites look like Fox News affiliates or The Washington Post.

So stick to brands you’ve heard of, and don’t assume a chatbot’s answer is more sophisticated than a traditional search engine’s results, Menn advises.

“Be aware that the bad guys are getting more bang for the buck in spreading lies with AI. And the AI defenses against it are not particularly built up yet, in part because all these different AI companies are rushing to get out new versions based on more sources of information.”
