Instagram tightens security on teens' direct messages

More information about who teens are chatting with in direct messages on Instagram, and the extension of the "teen accounts" rules to adult-run profiles featuring children: these are the new safety measures Meta has announced.
Regarding the first new feature, teens will now see options to view safety tips and block an account, along with the month and year the profile joined Instagram, all displayed at the top of new chats. A combined block-and-report option is also coming to direct messages, so that people can take both actions at once.
Mark Zuckerberg's company is also strengthening protections for accounts that feature children but are managed by adults, such as parents or talent managers. Meta allows accounts representing children under 13 "if it's clear in the account bio that an adult is managing it," while "if we discover that the account is being managed by the child themselves, we remove it." "While these accounts are mostly used in good faith," the company emphasizes, "unfortunately, there are people who may try to abuse them, leaving sexually explicit comments under their posts or requesting explicit images in direct messages, in clear violation of our rules."
To prevent this, Meta is extending some of the "teen accounts" protections to these types of profiles. This involves "automatically adding these accounts to our more restrictive messaging settings to prevent unwanted messages and enabling hidden words, which filters out offensive comments." Additionally, Meta will show these accounts "a notification at the top of their Instagram feed, letting them know that we've updated their security settings and prompting them to also review their account's privacy settings." These features will be introduced in the coming months.
Meta also announced a feature not yet available in Europe: a location alert on Instagram that "lets people know if they're chatting with someone who might be in another country, designed to help protect people from potential scammers who practice sextortion and often lie about where they live."
Since the nudity protection feature was introduced globally, the company further explains, "99% of people, including teens, have kept it turned on, and in June, over 40% of blurred images received in direct messages remained blurred, significantly reducing exposure to unwanted nudity. Nudity protection, which is on by default for teens, also encourages people to think twice before forwarding potentially nude images, and in May, people decided not to forward about 45% of the time after seeing this warning."
Meta also states that since the beginning of 2025, it has removed nearly 135,000 Instagram accounts for leaving sexually explicit comments or requesting sexually explicit images from adult-run accounts featuring children under 13. It has also removed another 500,000 Facebook and Instagram accounts that were linked to these original accounts.
ANSA