I apologize in advance for the length of this article, but there is a lot of information in the linked article.
On April 26th, PC Magazine posted an article titled, “Why Disinformation and Misinformation Are More Dangerous Than Malware.”
Here are some highlights from that article:
“The overwhelming majority of people who are ever going to see a piece of misinformation on the internet are likely to see it before anybody has a chance to do anything about it,” according to Yoel Roth, the former head of Trust and Safety at Twitter.
When he was at Twitter, Roth observed that over 90% of the impressions on posts were generated within the first three hours. That’s not much time for an intervention, which is why it’s important for the cybersecurity community to develop content moderation technology that “can give truth time to wake up in the morning,” he says.
“It’s a hacking of people problem,” lamented panel moderator Ted Schlein, chairman and general partner at Ballistic Ventures, a cybersecurity venture capital firm. “In my view, if we spend so much time, energy, and dollars fighting to protect our technology and our systems, shouldn’t we be doing the same for people?”
The cybersecurity community should focus on creating ways to detect and shut down disinformation while mitigating its effects, Schlein argued. Presumably, this call to action includes targeting misinformation, which differs from disinformation as it relates to intent. (Misinformation is defined as “incorrect or misleading information,” regardless of intent. Disinformation is a lie told deliberately to influence opinion or cover up a fact.)
I totally disagree with his perspective. The responsibility does not lie with the platform; it lies with the reader, who should take the time to evaluate the information and do their own research. Saying that a platform should detect and shut down disinformation leads to censorship. It also raises the question of who decides what is misinformation or disinformation. Remember that during the 2020 election, articles about Hunter Biden’s laptop were censored and declared misinformation or disinformation. How did that work out?
The article also notes:
Here are some recent examples of disinformation campaigns and misinformation spreaders caught in the act:
- Medical professionals have been complaining for years about patients taking dangerous advice from “expert influencers” on platforms such as TikTok and YouTube.
- A Chinese disinformation operative exposed a plan to discourage Americans from voting by spreading lies online.
- Last year, climate change deniers found themselves de-platformed by Pinterest.
- Spotify darling Joe Rogan briefly found himself in hot water for airing conversations with COVID-19 deniers.
- Experts say the “playbook” for disseminating the kind of disinformation seen in the run-up to the 2016 general election in the United States is being exploited around the world.
- As PCMag’s Chandra Steele writes, antisemitic conspiracy theorists are making their voices heard on Twitter, and the platform is failing to protect its users.
Why is the platform required to protect its users? Users can decide for themselves what to believe and which platforms to frequent.
Mr. Roth also stated that truth can change. If truth changes, was it truth to begin with?
The article reports:
Roth began his part of the panel discussion by noting that it’s natural for knowledge and perceived truths to change over time, and “something that is known to be true with absolute certainty one day could be known to be totally false another.”
Roth cautioned that misinformation is not actually like malware because malware is software that has been designed to generate a specific outcome every time it runs. Disinformation doesn’t guarantee the intended results. Effectively tackling misinformation and disinformation online will require dynamism and flexibility from cybersecurity developers, Roth said.
Please follow the link above to read the entire article. There is a section at the end that reminds us of the First Amendment. Not all media platforms are happy that the First Amendment exists. We need to keep that in mind.