The policy changes were announced in a blog post in which Facebook outlined how content related to self-harm and suicide will be handled moving forward.
The company said it will “no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm.”
And although Facebook says the move is aimed at helping people, the policy will apply even “when someone is seeking support or expressing themselves to aid their recovery.”
Facebook will also censor images of healed, self-inflicted cuts, placing them behind a “sensitivity screen” that users must click through if they wish to view the content behind it.
Facebook-owned Instagram, meanwhile, has said it will ban graphic images of self-harm following objections in the wake of a British teen’s suicide; the 14-year-old's father said that the platform contributed to her decision to kill herself.
Instagram has also said it will deprioritize content depicting self-harm. This type of material will be removed from the platform’s Explore tab and its suggestion algorithm, meaning such posts will no longer appear in searches or under relevant hashtags. Facebook is also hiring a health and well-being expert to serve on a safety team.
The companies say that images of self-harm, even those posted by people acknowledging their own struggles, may unintentionally promote it. So far, however, the measures have not been entirely successful: at the end of August, a U.S. Army veteran took his own life during a Facebook live stream, footage of which eventually made its way to other social media platforms. The victim’s friends say the livestream had been reported to Facebook before the suicide occurred, but nothing was done.
Moreover, some experts insist that these policies will only serve to exacerbate the stigma that is felt by people with suicidal thoughts and non-suicidal self-injury and make them feel even more socially isolated.
This is just the latest in a long string of censorship moves by the social media platform. Facebook has also been using a system whereby articles that its fact checkers deem “fake news” are flagged as “disputed,” with a link to an article explaining why. Users who try to share such articles are asked whether they are sure they wish to proceed, and stories marked “disputed” are pushed downward in the news feed.
Of course, in reality, the system is being used to silence voices that speak out about the dangers of vaccines and other topics that ultra-liberal Big Tech doesn’t want people to know about. Their so-called fact checkers come from very left-leaning biased agencies and news outlets.
They have also been censoring doctors who dare to speak out about the effectiveness of hydroxychloroquine (HCQ) as a treatment for COVID-19. Because HCQ is a cheap, generic drug, it threatens the profits of Big Pharma’s expensive, intravenously delivered and only-sometimes-effective drug, Remdesivir.
For example, Facebook, YouTube, Google and Twitter all pulled a video conference known as the White Coat Summit, in which frontline doctors from across the country talked about HCQ, zinc and other approaches that have been used successfully to treat the disease.
If Facebook and its peers were really so interested in helping people, they would let experts tell the truth about dangerous interventions like vaccines instead of silencing them to keep the Big Pharma ad money rolling in. They would encourage open dialogue rather than trying to keep the independent media and conservative voices from getting airtime. They would spread the word about the dangers of 5G rather than trying to hide the truth. Soon, everything they disagree with will be considered a form of self-harm (like not getting vaccines) or labeled as “disputed.”