
Australia says it may go after app stores, search engines in AI age crackdown


Australia’s internet regulator has warned that it may push search engines and app stores to block artificial intelligence services that fail to verify user ages. A recent Reuters review found that more than half of such services have taken no public steps to comply ahead of the looming deadline.

The move is among the most aggressive worldwide in regulating AI companies, several of which face lawsuits alleging they failed to prevent self-harm or violence. Researchers have also warned that such platforms may harm youth mental health even more than social media does.

Australia made headlines in December as the first country to ban social media for teenagers, citing mental health concerns. It is now extending that crackdown to AI by imposing age restrictions on the content these technologies can serve to minors.


Australia in December became the first country to ban social media for teenagers, citing mental health concerns. REUTERS

Starting March 9, internet services in Australia must restrict under-18 access to pornography, extreme violence, and content promoting self-harm or eating disorders, or face fines of up to 49.5 million Australian dollars ($35 million).

The regulator, known as eSafety, has said it will act against non-compliance, including by imposing restrictions on gatekeeper services such as search engines and app stores.

Several AI companies, including OpenAI and Character.AI, have faced legal challenges over their interactions with young users. OpenAI recently disclosed that it had deactivated the account of a teenager suspected in a Canadian mass shooting before the attack.

While Australia has not yet reported incidents of chatbot-related violence or self-harm, concerns persist about young children spending excessive time with AI-powered chatbots.

eSafety has also raised concerns that AI companies use sophisticated engagement techniques to keep young users interacting with chatbots. Gatekeeper platforms such as Apple and Google, meanwhile, are taking steps to comply with the new Australian rules.


A week before Australia’s deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems. AFP via Getty Images

As the deadline approaches, more AI products are implementing age assurance systems to comply with the new rules, though many platforms have still taken no steps toward compliance.

AI companies operating in Australia must understand and meet their legal obligations or risk enforcement action.


OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users. AFP via Getty Images

While some AI products have adopted age assurance systems, many still lack meaningful filtering or age verification. Australia’s regulations are ultimately aimed at pushing these companies to protect the well-being of young users and provide a safer online environment.
