Australia Eyes AI Age Verification Crackdown
2 Mar
Summary
- Regulator may block AI services that fail to verify user ages.
- March 9 deadline for AI platforms to restrict harmful content for under-18s.
- Over half of popular AI services have not disclosed compliance plans.

Australia's internet regulator is preparing to take enforcement action against artificial intelligence services that do not verify user ages. A critical deadline arrives on March 9, after which AI platforms must restrict under-18s' access to pornography, extreme violence, and content related to self-harm and eating disorders. Failure to comply could result in fines of up to A$49.5 million.
This proactive stance is one of the most rigorous global efforts to regulate AI companies, which are increasingly facing lawsuits for their role in harmful content. Researchers also highlight concerns about AI's negative impact on youth mental health, potentially exceeding that of social media.
Despite the impending deadline, a review found that more than half of the most popular text-based AI products have not made their age-verification plans public. Many services are instead opting for broad content filters or blocking Australian users under 18 from the service entirely. Some AI providers are under specific investigation over synthetic sexualized imagery and alleged encouragement of harmful behavior.
Concerns are mounting that AI companies are employing sophisticated techniques, including emotional manipulation, to foster excessive usage among young people. While Australia has not yet reported specific incidents of chatbot-linked violence, the regulator has been informed of children as young as 10 spending up to six hours daily interacting with these tools. Major app store operators like Apple have stated they will implement "reasonable methods" for age restriction, though specific details remain unclear.
