Meta, the parent company of Instagram, has announced plans to introduce new features aimed at safeguarding teenagers and curbing risks posed by malicious actors on its platform. The announcement comes amid growing concern over harmful content and its impact on young users.
In a recent statement, Meta disclosed its intention to trial several features designed to strengthen safety on Instagram. Chief among them is technology that blurs images containing nudity in direct messages, with the aim of shielding teenagers from inappropriate content. The feature uses on-device machine learning to analyze images for nudity before they are sent. Meta said it will be enabled by default for users under 18, and adults will be encouraged to turn it on as an additional layer of protection. Because the analysis happens on the device itself, the nudity protection will work even in end-to-end encrypted chats, preserving privacy while prioritizing safety.
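The gating logic described above can be sketched in a few lines. This is a minimal illustration, not Meta's implementation: the classifier output, score range, threshold, and all names below are hypothetical stand-ins for the on-device model the announcement describes.

```python
from dataclasses import dataclass


@dataclass
class User:
    age: int
    nudity_filter_opt_in: bool = False  # adults may opt in manually


def filter_enabled(user: User) -> bool:
    # Per the announcement: on by default for under-18s, opt-in for adults.
    return user.age < 18 or user.nudity_filter_opt_in


def deliver_image(user: User, nudity_score: float, threshold: float = 0.8) -> str:
    # nudity_score stands in for a hypothetical on-device classifier's
    # confidence that the image contains nudity (0.0 to 1.0).
    # Because this runs locally, it works even over end-to-end encryption.
    if filter_enabled(user) and nudity_score >= threshold:
        return "blurred"  # recipient sees a blurred preview first
    return "clear"
```

For example, an image the classifier flags would arrive blurred for a 15-year-old by default, arrive clear for an adult who has not opted in, and arrive blurred for an adult who has.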
Additionally, Meta revealed ongoing efforts to develop technology to identify accounts potentially engaged in sextortion scams, a form of online exploitation. The company also intends to test new pop-up messages to warn users who may have interacted with such accounts, demonstrating a proactive approach to addressing online threats.
These initiatives build on Meta’s commitment in January to hide more content from teenage users on both Facebook and Instagram, a change intended to reduce their exposure to sensitive topics such as suicide, self-harm, and eating disorders.
Meta’s announcement coincides with increasing legal scrutiny in both the United States and Europe. In October, attorneys general from 33 U.S. states, including California and New York, filed a lawsuit against the company, alleging that it repeatedly misled the public about the risks its platforms pose to young people. Similarly, the European Commission has sought clarification from Meta regarding its efforts to protect children from illegal and harmful content.
Overall, Meta’s latest efforts underscore its push to protect teenage users from risks on its platforms. By pairing on-device technology with proactive warnings, Meta aims to create a safer environment for young users while addressing concerns raised by regulators and other stakeholders.