Australia’s online safety regulator is stepping up its campaign to protect children from sexual exploitation, delivering legal notices to a sweep of tech companies that will compel them to do more to stamp out online child sex abuse.
The notices, issued by eSafety Commissioner Julie Inman Grant, will require Apple, Google, Meta and Microsoft to report to the regulator every six months about the measures they have in place to suppress abuse material.
Notices have also been sent to Discord, Snap, Skype and the Meta-owned messaging service WhatsApp.
The tech giants must explain to the commission how they are tackling child abuse material, livestreamed abuse, online grooming, sexual extortion and where applicable the production of “synthetic” or deepfaked child abuse material created using generative AI.
The legal action follows the discovery, in 2022 and 2023, of a range of safety shortcomings in how the platforms protect children from abuse.
“When we sent notices to these companies back in 2022 and 2023, some of their answers were alarming but not surprising as we had suspected for a long time there were significant gaps and differences across services’ practices,” Ms Inman Grant said.
“In our subsequent conversations with these companies, we still haven’t seen meaningful changes or improvements to these identified safety shortcomings.
“Apple and Microsoft said in 2022 that they do not attempt to proactively detect child abuse material stored in their widely used iCloud and OneDrive services.
“This is despite the fact it is well-known that these file-storing services serve as a haven for child sexual abuse and pro-terror content to persist and thrive in the dark.
“We also learnt that Skype, Microsoft Teams, FaceTime and Discord did not use any technology to detect live-streaming of child sexual abuse in video chats.
“This is despite evidence of the extensive use of Skype, in particular, for this longstanding and proliferating crime.
“Meta also admitted it did not always share information between its services when an account is banned for child abuse, meaning offenders banned on Facebook may be able to continue perpetrating abuse through their Instagram accounts and offenders banned on WhatsApp may not be banned on either Facebook or Instagram.”
The commission said eight different Google services, including YouTube, were not blocking links to websites known to contain child abuse material, despite the availability of databases of these known abuse websites that many other services use.
eSafety investigators also concluded that Snapchat was not using any tools to detect child grooming in chats.
The internet and the giant tech platforms that dominate it allow predators to prey on children from positions of anonymity.
A 2023 report from the National Center for Missing and Exploited Children recorded more than 36.2 million reports of suspected child sexual exploitation involving electronic service providers (ESPs).
Amazon Photos reported 25,497 instances of apparent abuse, while Dropbox made 54,045 reports.
Facebook reported 17,838,422 instances and Instagram reported 11,430,007.
Snapchat reported 713,055 instances of abuse, while X, formerly known as Twitter, made 273,416 reports.
Apple made 267 reports, despite its vast cloud infrastructure.
US-based providers are legally required to report instances of “apparent child pornography” to the centre’s CyberTipline when they become aware of them, but there is no legal requirement to proactively detect such content, nor any specification of what information a provider must include in a report, the centre said.
Ms Inman Grant, who is charged by parliament with ensuring online safety in Australia, said the notices would inform the commission whether the tech behemoths had made any improvements in their safety measures since 2022 and hold them “accountable for harm still being perpetrated against children on their services”.
“We know that some of these companies have been making improvements in some areas and this is the opportunity to show us progress across the board,” she said.
Key potential safety risks considered in this round of notices include the ability for adults to contact children on a platform, risks of sexual extortion, as well as features such as livestreaming, end-to-end encryption, generative AI and recommender systems, the commission said.
Compliance with the notices is mandatory, and services that do not respond may face financial penalties of up to $782,500 a day.
The companies will have until February 15, 2025, to provide their first round of responses.