Meta revealed that it is working on a new AI-powered tool that will help determine whether a person signing up for Instagram is a teen or an adult.
Government agencies around the world are scrutinizing Meta, the parent company of Instagram, accusing the platform of harming children's physical and mental health through excessive use.
In a blog post, Meta announced that it is working on a new AI-powered tool that could help catch teens who are lying about their age. While Meta-owned platforms like Instagram and Facebook require users to enter their age when signing up, many users lie about their age.
Meta says the AI model, dubbed the Adult Classifier, can “help determine whether someone is an adult (18 or over) or a teen (13-17)” and will automatically apply the appropriate privacy settings.
The model determines whether a person is a teen or an adult by analyzing signals such as profile information, account creation date, and the user's content and interactions with others.
If the Adult Classifier determines that a teen is using an account, Instagram will set the account to private and prevent the teen from messaging strangers.
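Meta has not published how the Adult Classifier works, so the sketch below is purely illustrative: a hypothetical Python flow showing the pattern the article describes, where account signals feed a classifier and a "teen" result triggers stricter default settings. The signal names, heuristic, and settings keys are all assumptions, not Meta's actual system.

```python
# Hypothetical illustration only; Meta has not disclosed the Adult Classifier's design.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    profile_bio: str
    account_age_days: int
    interaction_keywords: list[str]  # e.g. topics seen in comments and messages


def classify_age_bracket(signals: AccountSignals) -> str:
    """Stand-in for the real model; returns 'teen' (13-17) or 'adult' (18+)."""
    teen_hints = {"homework", "high school", "prom"}
    if any(keyword in teen_hints for keyword in signals.interaction_keywords):
        return "teen"
    return "adult"


def apply_default_settings(bracket: str) -> dict:
    """Mirror the behavior described in the article for accounts flagged as teen."""
    if bracket == "teen":
        return {"account_private": True, "messages_from_strangers": False}
    return {"account_private": False, "messages_from_strangers": True}


signals = AccountSignals("sophomore, love volleyball", 40, ["homework", "memes"])
print(apply_default_settings(classify_age_bracket(signals)))
# -> {'account_private': True, 'messages_from_strangers': False}
```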
However, the accuracy of the new AI-powered Adult Classifier remains unknown. In a statement to Bloomberg, Meta said that users wrongly flagged by the software can appeal the decision by sharing a government ID or uploading a selfie.