Facebook Caught Manipulating Vaccine Facts To Trick Conservative Users

Whistleblowers from inside Facebook have teamed up with Project Veritas to reveal some extremely concerning censorship tactics being employed by the social media giant.

The Facebook whistleblowers allege that the far-left company is pushing an effort to censor content expressing vaccine hesitancy on its platform.

A new Project Veritas report shows a leaked internal memo explaining “Vaccine Hesitancy Comment Demotion,” which states that the “goal” is to “drastically reduce user exposure to vaccine hesitancy.”

Another leaked document addressed a “Borderline Vaccine (BV) Framework” that delves into how to classify such content, with another expressed “goal” to “identify and tier the categories of non-violating content that could discourage vaccination in certain contexts, thereby contributing to vaccine hesitancy or refusal,” adding, “We have tiered these by potential harm and how much context is required in order to evaluate harm.”

“Facebook uses classifiers in their algorithms to determine certain content… they call it ‘vaccine hesitancy.’ And without the user’s knowledge, they assign a score to these comments that’s called the ‘VH Score,’ the ‘Vaccine Hesitancy Score,’” one Facebook insider told Project Veritas founder James O’Keefe.

“And then based on that score will demote or leave the comment alone depending on the content within the comment.”

The insider, who is described by O’Keefe as a “data center technician” for Facebook, revealed that the tech giant was running a “test” on 1.5% of its 3.8 billion users with the focus on the comments sections on “authoritative health pages.”

“They’re trying to control this content before it even makes it onto your page, before you even see it,” the insider told O’Keefe.

The ratings are divided into two tiers, one being “Alarmism & Criticism” and the other being “Indirect Vaccine Discouragement,” which includes celebrating vaccine refusal and “shocking stories” that may deter others from taking the vaccine.

The algorithm flags key terms in comments to determine whether a comment can remain in place, but allows human “raters” to make a ruling if the algorithm cannot do so itself.

Another Facebook insider described as a “Data Center Facility Engineer” compared Facebook’s actions to being in an “abusive” relationship where they’re “not allowing their spouse to speak out about the things that are going on in their marriage… and limiting their voice… It’s very incriminating, in my opinion.”

The document added that the algorithm was the only tool Facebook had at the time to address “the high prevalence of vaccine hesitancy in Health comments,” and that a better “detection” tool would be implemented when available.

In response to the leaked documents, Facebook told Project Veritas, “We proactively announced this policy on our company blog and also updated our help center with this information.”

In March, Facebook CEO Mark Zuckerberg announced that the company would add labels to posts about vaccines to counter misinformation.

“For example, we’re adding a label on posts that discuss the safety of COVID-19 vaccines that notes COVID-19 vaccines go through tests for safety and effectiveness before they’re approved,” Zuckerberg wrote.

Facebook and its peers in the social media world have certainly kicked up their narrative controlling tactics throughout the COVID-19 pandemic – and it doesn’t appear as though they plan on stopping anytime soon.

Author: Scott Mason