WEF Proposes Globalized Plan to Police Online Content Using Artificial Intelligence



The World Economic Forum this month published an article calling for an online censorship system powered by a combination of artificial and human intelligence that one critic suggested would “globalize” the “search for wrongthink.”

By Michael Nevradakis, Ph.D.  | Children’s Health Defense

Warning about a “dark world of online harms” that must be addressed, the World Economic Forum (WEF) this month published an article calling for a “solution” to “online abuse” that would be powered by artificial intelligence (AI) and human intelligence.

The proposal calls for an AI-based system that would automate the censorship of “misinformation” and “hate speech” and curb the spread of “child abuse, extremism, disinformation, hate speech and fraud” online.

According to the author of the article, Inbal Goldberger, human “trust and safety teams” alone are not fully capable of policing such content online.

Goldberger is vice president of ActiveFence Trust & Safety, a technology company based in New York City and Tel Aviv that claims it “automatically collects data from millions of sources and applies contextual AI to power trust and safety operations of any size.”

Instead of relying solely on human moderation teams, Goldberger proposes a system based on “human-curated, multi-language, off-platform intelligence” — in other words, input provided by “expert” human sources that would be used to build “learning sets” to train the AI to recognize purportedly harmful or dangerous content.

This “off-platform intelligence” — more machine learning than AI per se, according to Didi Rankovic of ReclaimTheNet.org — would be collected from “millions of sources” and would then be collated and merged before being used for “content removal decisions” on the part of “Internet platforms.”

According to Goldberger, the system would supplement “smarter automated detection with human expertise” and would allow for the creation of “AI with human intelligence baked in.”

This, in turn, would provide protection against “increasingly advanced actors misusing platforms in unique ways.”

“A human moderator who is an expert in European white supremacy won’t necessarily be able to recognize harmful content in India or misinformation narratives in Kenya,” Goldberger explained.

However, “By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision” as these learning sets are “baked in” to the AI over time, Goldberger said.

This would, in turn, enable “trust and safety teams” to “stop threats rising online before they reach users,” she added.
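To make the mechanism more concrete, the short Python sketch below illustrates the generic pattern Goldberger describes: human experts label example posts, a classifier is trained on those “learning sets,” and uncertain cases are routed back to human reviewers whose decisions feed future training. It is a minimal, hypothetical illustration using the open-source scikit-learn library, not ActiveFence’s or the WEF’s actual technology; the example data, thresholds and function names are invented for demonstration.

```python
# Illustrative sketch only -- not ActiveFence's or the WEF's actual system.
# It shows the generic "human-curated learning set -> automated classifier ->
# human review of uncertain cases" loop described in the article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical "learning set": posts labeled by human subject-matter experts
# (1 = flagged as "harmful", 0 = allowed). A real system would merge labels
# from many languages and off-platform sources.
labeled_posts = [
    ("example of an abusive post", 1),
    ("example of a fraudulent offer", 1),
    ("ordinary conversation about the weather", 0),
    ("harmless product review", 0),
]

texts, labels = zip(*labeled_posts)

# Train a simple text classifier on the expert-labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def moderate(post: str, auto_threshold: float = 0.9, review_threshold: float = 0.5):
    """Return an action for a new post: remove, send to human review, or allow."""
    p_harmful = model.predict_proba([post])[0][1]
    if p_harmful >= auto_threshold:
        return "remove"        # high-confidence automated decision
    if p_harmful >= review_threshold:
        return "human_review"  # uncertain cases go back to expert moderators,
                               # whose labels are added to the learning set
    return "allow"

print(moderate("ordinary conversation about the weather"))
```

In this kind of loop, the moderators’ decisions on the borderline cases become new training examples, which is what Goldberger appears to mean by human expertise being “baked in” to the AI over time.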

In his analysis of what Goldberger’s proposal might look like in practice, blogger Igor Chudov explained how content policing on social media today occurs on a platform-by-platform basis.

For example, Twitter content moderators look only at content posted to that particular platform, but not at a user’s content posted outside Twitter.

Chudov argued this is why the WEF appears to support a proposal to “move beyond the major Internet platforms, in order to collect intelligence about people and ideas everywhere else.”

“Such an approach,” Chudov wrote, “would allow them to know better what person or idea to censor — on all major platforms at once.”

The “intelligence” collected by the system from its “millions of sources” would, according to Chudov, “detect thoughts that they do not like,” resulting in “content removal decisions handed down to the likes of Twitter, Facebook, and so on … a major change from the status quo of each platform deciding what to do based on messages posted to that specific platform only.”

In this way, “the search for wrongthink becomes globalized,” Chudov concluded.

In response to the WEF proposal, ReclaimTheNet.org pointed out that “one can start discerning the argument here … as simply pressuring social networks to start moving towards ‘preemptive censorship.’”



Chudov posited that the WEF is promoting the proposal because it “is becoming a little concerned” as “unapproved opinions are becoming more popular, and online censors cannot keep up with millions of people becoming more aware and more vocal.”

According to the Daily Caller, “The WEF document did not specify how members of the AI training team would be decided, how they would be held accountable or whether countries could exercise controls over the AI.”

In a disclaimer accompanying Goldberger’s article, the WEF reassured the public that the content expressed in the piece “is the opinion of the author, not the World Economic Forum,” adding that “this article has been shared on websites that routinely misrepresent content and spread misinformation.”

However, the WEF appears to be open to proposals like Goldberger’s. For instance, a May 2022 article on the WEF website proposes Facebook’s “Oversight Board” as an example of a “real-world governance model” that can be applied to governance in the metaverse.

And, as Chudov noted, “AI content moderation slots straight into the AI social credit score system.”

UN, backed by Gates Foundation, also aiming to ‘break chain of misinformation’

The WEF isn’t the only entity calling for more stringent policing of online content and “misinformation.”

For example, UNESCO recently announced a partnership with Twitter, the European Commission and the World Jewish Congress leading to the launch of the #ThinkBeforeSharing campaign, to “stop the spread of conspiracy theories.”

According to UNESCO:

“The COVID-19 pandemic has sparked a worrying rise in disinformation and conspiracy theories.

“Conspiracy theories can be dangerous: they often target and discriminate against vulnerable groups, ignore scientific evidence and polarize society with serious consequences. This needs to stop.”

UNESCO’s director-general, Audrey Azoulay, said:

“Conspiracy theories cause real harm to people, to their health, and also to their physical safety. They amplify and legitimize misconceptions about the pandemic, and reinforce stereotypes which can fuel violence and violent extremist ideologies.”

UNESCO said the partnership with Twitter informs people that events occurring across the world are not “secretly manipulated behind the scenes by powerful forces with negative intent.”

UNESCO issued guidance for what to do in the event one encounters a “conspiracy theorist” online: One must “react” immediately by posting a relevant link to a “fact-checking website” in the comments.

UNESCO also provides advice to the public in the event someone encounters a “conspiracy theorist” in the flesh. In that case, the individual should avoid arguing, as “any argument may be taken as proof that you are part of the conspiracy and reinforce that belief.”

The #ThinkBeforeSharing campaign provides a host of infographics and accompanying materials intended to explain what “conspiracy theories” are, how to identify them, how to report on them and how to react to them more broadly.

According to these materials, conspiracy theories have six things in common:

  • An “alleged, secret plot.”
  • A “group of conspirators.”
  • “‘Evidence’ that seems to support the conspiracy theory.”
  • Suggestions that “falsely” claim “nothing happens by accident and that there are no coincidences,” and that “nothing is as it appears and everything is connected.”
  • Division of the world into “good or bad.”
  • Scapegoating of people and groups.

UNESCO doesn’t entirely dismiss the existence of “conspiracy theories,” instead admitting that “real conspiracies large and small DO exist.”

However, the organization claims, such “conspiracies” are “more often centered on single self-contained events, or an individual like an assassination or a coup d’état” and are “real” only if “unearthed by the media.”

In addition to the WEF and UNESCO, the United Nations (UN) Human Rights Council earlier this year adopted “a plan of action to tackle disinformation.”

The “plan of action,” sponsored by the U.S., U.K., Ukraine, Japan, Latvia, Lithuania and Poland, emphasizes “the primary role that governments have, in countering false narratives,” while expressing concern for:

“The increasing and far-reaching negative impact on the enjoyment and realization of human rights of the deliberate creation and dissemination of false or manipulated information intended to deceive and mislead audiences, either to cause harm or for personal, political or financial gain.”

Even countries that did not officially endorse the Human Rights Council plan expressed concerns about online “disinformation.”

For instance, China identified such “disinformation” as “a common enemy of the international community.”

An earlier UN initiative, in partnership with the WEF, “recruited 110,000 information volunteers” who would, in the words of UN global communications director Melissa Fleming, act as “digital first responders” to “online misinformation.”

The UN’s #PledgeToPause initiative, although recently circulating as a new development on social media, was announced in November 2020, and was described by the UN as “the first global behaviour-change campaign on misinformation.”

The campaign is part of a broader UN initiative, “Verified,” that aims to recruit participants to disseminate “verified content optimized for social sharing,” stemming directly from the UN communications department.

Fleming said at the time that the UN also was “working with social media platforms to recommend changes” to “help break the chain of misinformation.”

Both “Verified” and the #PledgeToPause campaign still appear to be active as of the time of this writing.

The “Verified” initiative is operated in conjunction with Purpose, an activist group that has collaborated with the Bill & Melinda Gates Foundation, the Rockefeller Foundation, Bloomberg Philanthropies, the World Health Organization, the Chan Zuckerberg Initiative, Google and Starbucks.

Since 2019, the UN has been in a strategic partnership with the WEF based on six “areas of focus,” one of which is “digital cooperation.”

© 08/19/22 Children’s Health Defense, Inc. This work is reproduced and distributed with the permission of Children’s Health Defense, Inc. Want to learn more from Children’s Health Defense? Sign up for free news and updates from Robert F. Kennedy, Jr. and the Children’s Health Defense. Your donation will help to support us in our efforts.

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of the Virginia Christian Alliance.

About the Author

LifeSiteNews
LifeSiteNews.com is a non-profit Internet news service dedicated to issues of life, family, and related concerns. It was launched in September 1997 to provide an alternative to mainstream news outlets that were either ignoring or reporting in a highly slanted way on these issues and on the activities and statements of pro-life, pro-family organizations around the world. LifeSiteNews Daily News reports and information pages are used by numerous organizations, publications, educators, professionals, and political, religious, pro-life and pro-family leaders and grassroots readers across North America and internationally. The reports cover important developments in the United States, Canada and around the world, with the aim of providing more balanced and accurate coverage of these issues than is usually given by other media. LifeSiteNews reports are available by free daily email subscription and on LifeSiteNews.com.