But the WEF isn't really focused on the economic problems now being experienced by the masses. Rather, its latest policy push is to ensure that social media platforms can automatically censor as much countervailing information as possible using the latest technology.
Specifically, as reported by Reclaim The Net, the WEF wants social media platforms to incorporate artificial intelligence (AI) to work alongside human censors, stripping the platforms of any political, social, cultural, economic, or policy viewpoints the globalist elite disagree with.
“By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision,” the organization claims.
Reclaim The Net noted:
But what is this supposed to mean?
At some point toward the end, the WEF finally spits it out (but spoiler: it still doesn’t make a whole lot of sense): instead of relying on what is throughout the article continuously and erroneously referred to as “AI” – the WEF says it is proposing “a new framework: rather than relying on AI to detect at scale and humans to review edge cases, an intelligence-based approach is crucial.”
It’s well worth quoting the entire techno-bubble word salad that is supposed to be the sales pitch of the writeup.
“By bringing human-curated, multi-language, off-platform intelligence into learning sets, AI will then be able to detect nuanced, novel abuses at scale, before they reach mainstream platforms. Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in,” the WEF notes in its blog post.
In this way – “trust and safety teams can stop threats rising online before they reach users,” the organization continued.
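Stripped of the jargon, the framework the WEF is describing is a standard human-in-the-loop moderation pipeline: an automated model scores content at scale, low-confidence "edge cases" get routed to human reviewers, and the reviewers' decisions are fed back into the training set. Here is a minimal sketch of that loop; the class, thresholds, and keyword scorer are all hypothetical stand-ins, not anything the WEF actually published.

```python
# Hypothetical sketch of the "AI detects at scale, humans review edge cases,
# findings feed back into training sets" loop described in the quoted post.
from dataclasses import dataclass, field

@dataclass
class ModerationPipeline:
    block_threshold: float = 0.9    # auto-remove content scoring above this
    review_threshold: float = 0.5   # route to human review between thresholds
    training_set: list = field(default_factory=list)

    def score(self, text: str) -> float:
        # Stand-in for a real classifier: a crude keyword heuristic,
        # for illustration only.
        flagged_terms = {"spam", "scam"}
        words = text.lower().split()
        hits = sum(1 for w in words if w in flagged_terms)
        return min(1.0, hits / max(len(words), 1) * 5)

    def triage(self, text: str) -> str:
        # High confidence -> automated action; middle band -> human reviewer.
        s = self.score(text)
        if s >= self.block_threshold:
            return "remove"
        if s >= self.review_threshold:
            return "human_review"
        return "allow"

    def record_human_label(self, text: str, label: str) -> None:
        # "Feeding those findings back into training sets": reviewer
        # decisions accumulate as new labeled examples for retraining.
        self.training_set.append((text, label))
```

The point of sketching it is that every step here embeds a policy choice: who picks the flagged terms, where the thresholds sit, and whose labels go back into the training set determine what the "near-perfect precision" system ends up removing.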
"As the internet has evolved, so has the dark world of online harms. Trust and safety teams (the teams typically found within online platforms responsible for removing abusive content and enforcing platform policies) are challenged by an ever-growing list of abuses, such as child abuse, extremism, disinformation, hate speech and fraud; and increasingly advanced actors misusing platforms in unique ways," the post adds.
The WEF post goes on to argue that hiring more human censors isn't enough: technology should be used to impose censorship programmatically, at scale.
"The solution, however, is not as simple as hiring another roomful of content moderators or building yet another block list. Without a profound familiarity with different types of abuse, an understanding of hate group verbiage, fluency in terrorist languages and nuanced comprehension of disinformation campaigns, trust and safety teams can only scratch the surface," the post argues.
Of course, what constitutes "misinformation" and "abuse" is always highly arbitrary; far-left platforms treat most conservative viewpoints as wrong and 'triggering,' and conservatives themselves are routinely likened to "Nazis" and "fascists," so their opinions get branded fascist by default.
"A more sophisticated approach is required. By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision," the WEF post claims.
The fact is, the mainstream social media apps and platforms are going to be used from here on out to shape and guide public opinion, not serve as a true reflection of it. Better to just disengage from them and go somewhere your voice isn't censored or muted, and where you can think for yourself.