Social media “recommends graphic content to users from the age of 13”
Social media accounts linked to children were “directly targeted” with graphic content within 24 hours of their creation, according to a new report on online safety.
It says accounts created for the study, based on real children as young as 13, received content on eating disorders, self-harm and sexualised images.
The study, by child safety group the 5Rights Foundation and England’s Children’s Commissioner Dame Rachel de Souza, described the findings as “alarming and upsetting” and called for mandatory rules on how online services are designed.
An age-appropriate design code comes into effect in September, with the Information Commissioner’s Office (ICO) able to impose fines and other penalties on services that do not build the new standards for protecting the data of users under 18 into their design.
But 5Rights said more needs to be done to build broader child safety into online platforms right from the design process.
It says that despite knowing the ages of younger users, social media platforms allowed them to be contacted, unsolicited, by adults and recommended potentially harmful content to them.
Facebook, Instagram and TikTok were the platforms named in the report, which was produced with research firm Revealing Reality.
In response, all three services said they take the safety of young users seriously.
“The results of this research are alarming and heartbreaking. But just as risks are designed into the system, they can be designed out,” said 5Rights chair Baroness Kidron.
“It’s time to adopt mandatory design standards for all services that impact or interact with children, to ensure their safety and well-being in the digital world.
“In all other contexts, we provide mutually agreed protections for children. A nightclub cannot serve a pint to a child, a retailer cannot sell them a knife, a theatre cannot allow them to watch an R18 movie, a parent cannot deny them an education, and a pharmacist cannot give them an adult dose of medication.
“These protections do not apply only when harm is proven, but in anticipation of the risks associated with their age and changing abilities.
“These protections are rooted in our legal system, our treaty obligations and our culture. Everywhere except in the digital world.
She added that the study revealed a “deep recklessness and disregard for children” that was “embedded” in the features, products and services of the digital world.
Dame Rachel said: “This research highlights the huge range of risks children currently face online.
“We do not allow children to access services and content inappropriate for them, such as pornography, in the offline world.
“They shouldn’t be able to access it in the online world either. I look forward to working with government, parents, online platforms and organisations like 5Rights to create a child-friendly online world.”
Online safety campaigner Ian Russell, who set up a foundation in memory of his daughter Molly after she took her own life having viewed self-harm and suicide content online, said the research showed “how algorithmic amplification actively connects children to harmful digital content, sometimes, as I unfortunately know only too well, with tragic consequences”.
“In our digital wilderness, young people need organised routes to explore, allowing them to move around while staying safe. Routes to trusted areas of support, especially with regard to mental health, should be better marked so that help can be provided whenever needed,” he said.
“All of us – governments, businesses and individuals alike – need to act quickly to put things right in the digital world.
“We need to find ways to eliminate online harm and cultivate goodness, if our digital world is to prosper as it should.
“Above all, we need to put safety first, especially for children when they are online. We must work to stop digital wolves from hunting down vulnerable people and destroying young lives.”
Responding to the report, a spokesperson for TikTok said: “Our top priority is to promote a safe and positive experience on TikTok, and we removed 62 million videos in the first quarter of 2021 for violating our Community Guidelines, 82% of which were removed before they had received a single view.
“Protecting our young users is vitally important, and TikTok has taken cutting-edge steps to promote a safe and age-appropriate experience for teens.
“We turned off direct messaging for under-16s, made accounts for 13 to 15-year-olds private by default, and introduced Family Pairing so parents and guardians can control settings such as search and direct messaging.”
A spokesperson for Facebook, which also owns Instagram, said: “We agree that our apps should be designed with the safety of young people in mind.
“We don’t allow pornographic content or content that encourages self-harm, and we’re also taking more aggressive steps to keep teens safe, including preventing adults from sending direct messages to teens who aren’t following them. We look forward to sharing more in the coming weeks.
“It should be noted, however, that the methodology of this study is weak in a few areas: First, it seems that they drew sweeping conclusions about the overall experience of teens on Instagram from a handful of avatar accounts.
“Second, the posts it highlights were not recommended to these avatar accounts, but actively sought out or followed.
“Third, many of these examples predate changes we made to provide support for people searching for content related to self-harm and eating disorders.”