Simon Gwynn
Oct 12, 2021

Facebook to encourage teens to ‘take a break’ from Instagram, Nick Clegg says

Social media giant’s VP of global affairs also endorsed greater regulation of tech platforms.

Nick Clegg: interviewed on CNN

Facebook will introduce three new tools to tackle harmful use of Instagram by teens, its vice-president of global affairs Nick Clegg said in an interview yesterday (10 October).

Speaking on CNN’s State of the Union, Clegg said the company would introduce a feature called “take a break”, prompting young users to log off Instagram where evidence suggests they could be using it in a potentially harmful way.

Alongside this, it will prompt teen users who are looking at the same content repeatedly to look at something different, and introduce new controls for their parents.

“We're going to introduce new controls for adults of teens, on an optional basis obviously, so that adults can supervise what their teens are doing online,” he said. 

“Secondly, we're going to introduce something which I think will make a considerable difference, which is where our systems see that a teenager is looking at the same content over and over again and it’s content which may not be conducive to their wellbeing, we will nudge them to look at other content.”

In other circumstances, “we will be prompting teens to simply just take a break from using Instagram, and I think these are exactly the kinds of things which are in line with ongoing work we've been doing in co-operation with experts for many years”. He added: “We clearly want to redouble our efforts going forward.”

The moves, which come alongside a decision to pause development on the planned teen-focused platform Instagram For Kids, follow Facebook whistleblower Frances Haugen giving evidence to the US Senate last week, in which she accused the company of prioritising “profit over safety”.

On 25 October, Haugen will give evidence to the UK Parliament’s Joint Committee on the draft Online Safety Bill, through which the government plans to create a new regulatory framework to tackle harmful online content.

Conservative MP Damian Collins, who chairs the Committee, said: “Frances Haugen's evidence has so far strengthened the case for an independent regulator with the power to audit and inspect the big tech companies.

“Mark Zuckerberg and Nick Clegg have complained that she has only presented a partial view of the company, whereas they want to stop any insights into how the company manages harmful content from getting into the public domain, unless they have personally approved them.

“There needs to be greater transparency on the decisions companies like Facebook take when they trade off user safety for user engagement. We look forward to discussing these issues with Frances Haugen.”

CNN host Dana Bash used the interview to ask about Clegg’s views on greater regulation of tech companies—something Facebook has long said it supports.

On possible legislation requiring parental consent for children under 16 to use social media, Clegg said: “Of course, if lawmakers want to set down rules for us and for TikTok, YouTube and Twitter about exactly how young people should operate online, we, of course, will abide by the law. I think it's right that this is a subject of great bipartisan interest and discussion, because there's nothing more important to any of us than our kids, and I think, by the way, regulation would be very useful in many ways.”

He also endorsed the idea of providing access to Facebook’s algorithms for regulators. “Yes, we need greater transparency so the systems that we have in place… including not only the 40,000 people we employ on this but also the multibillion-dollar investments we've made into algorithmic [and] machine learning systems, should be held to account—if necessary by regulation—so that people can match what our systems say they're supposed to do from what actually happens.”

But he spoke in defence of employing algorithms to order the content users see, after Haugen claimed that Facebook’s use of them was dangerous.

“If you remove the algorithms, which I think is one of [Haugen’s] central recommendations, the first thing that would happen is that people would see more, not less, hate speech, more, not less, misinformation, because these algorithms are designed precisely to work almost like giant spam filters to identify and deprecate bad content. And you know, I really do think we should remember that technology, of course it has downsides, but also has very powerful, positive effects.”

Source:
Campaign UK
