Misinformation is another weapon in Israel-Hamas war, experts say
SAN FRANCISCO - As the Israel-Hamas war is being waged on the ground and in the air, another battle is raging online and on social media.
In the war to win over hearts and minds, the spread of misinformation and disinformation has exploded.
"There are a lot of fake images generated by AI, fake audios, fake videos, fake tweets. Things from video games," said Hani Farid, a UC Berkeley professor of computer science who researches fake content.
"This is categorically different than what we saw in terms of mis-and-disinformation and generative AI two years ago, and that shows how fast the technology is developing," said Farid.
Since October 7, when Hamas attacked Israel, graphic and emotionally charged images and assertions have gone viral on social media.
News fact-checkers have been working to research and debunk fake posts, but experts say the speed of sharing and re-posting content makes correcting fake content extremely difficult.
"And I think this is really the most nefarious problem we're facing in the age of generative AI, is people suddenly don't know what to trust," said Farid.
One such post, an early claim that Hamas beheaded 40 babies, has been found by news fact-checkers to be a misrepresentation of an October 10 report by Nicole Zedek, a journalist with i24 News.
Zedek's online report says "Some soldiers say they found babies with their heads cut off, entire families gunned down in their beds. About 40 babies and young children have been taken out on gurneys — so far."
Still, the misinformation about "40 beheaded babies" was shared and went viral, garnering millions of views.
Farid says that when Israel later posted two photos of babies killed in the attack, skepticism and mistrust dominated the discussions.
"The conversation was not about what was in the image. It was about the dispute of whether it was real or fake," said Farid. "Even if one percent of the images, the audio and the video and the tweets are fake, it poisons everything."
Other examples have included a fake White House memo claiming President Biden was sending aid to Israel.
Another example is an old video that was altered to falsely claim that North Korea's leader blamed President Biden for the Israel-Hamas war.
A widely circulated video of purported explosions and rocket attacks was found by news fact-checkers to have come from a video game.
"It's something social media companies should be making a number one priority," said Corynne McSherry, Legal Director of the Electronic Frontier Foundation.
McSherry says to effectively monitor content, social media companies need to hire more people with the language skills and cultural knowledge to make informed decisions.
McSherry says the way social media companies are structured to do business is also a factor in the spread of misinformation.
"They want engagement as they say, and inflammatory rhetoric gets engagement. So the other problem is that there are business models that are built on dangerous speech," said McSherry.
Farid says new technology could be implemented to authenticate the date and location of images and audio on devices.
"The technology is there but it hasn't been fully deployed. And we has to be deployed in two ways. One is we need the boots on the ground using the technology, that means it has to be in the devices. But then we also need the social media companies to respect those cryptographic signatures so we know what we can trust," said Farid.
McSherry says users are one of the most important factors in stopping the flow of misinformation.
"Before you share something, what's the source? How much do you know about the source of this information? Has it been vetted by anybody first?" said McSherfy.