Jessica Goodfellow
May 15, 2019

Facebook introduces ‘one-strike’ live-streaming policy

Following the Christchurch terror attack the social-media platform is tightening its live-streaming policies.

Facebook is introducing a ‘one strike’ policy to Live and has committed US$7.5 million in research funds to find better ways to detect harmful content on its platform in the aftermath of the Christchurch terror attack.

Previously, if a user posted content that violated its community standards either in Live or elsewhere, Facebook simply took down the post.

If the user continued to post violating content, Facebook would block them from using the platform for a certain period of time, and in some cases ban them altogether, either because of repeated low-level violations or a single egregious violation.

From now on, anyone who violates Facebook’s most serious policies will be restricted from using Live for a set period of time starting on their first offense.

This would mean that if a user shared a link to a statement from a terrorist group with no context, that user would now be immediately blocked from using Live for a set period of time.

The social network plans to extend these restrictions to other areas of its platform over the coming weeks, beginning with preventing the same people from creating ads on Facebook.

“We recognise the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” the company said in a blog post announcing the change. “Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.”

The tightened policy specifically addresses concerns over Facebook’s role in facilitating the spread of terrorist content, following the livestream of the Christchurch terror attack in March.

“Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” the platform said.

The platform is also working with academic establishments to research new techniques to detect manipulated media across images, video and audio, and distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.

Facebook’s US$7.5 million research investment will be undertaken with leading academics from the University of Maryland, Cornell University and the University of California, Berkeley.

“One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People — not always intentionally — shared edited versions of the video, which made it hard for our systems to detect,” the platform said.

“We realized that this is an area where we need to invest in further research.”

Source:
Campaign Asia
