YouTube will prevent users from commenting on most videos featuring minors, the Google-owned company said on Thursday, in response to growing concerns from users and advertisers that pedophiles were using comments to track and exploit children. The new policy, which comes amid a fierce backlash, will suspend comments not only on videos featuring children under the age of 13 but also on those featuring older minors who could be at risk of attracting predatory behavior. The video platform, which has already removed hundreds of millions of comments and shut down 400 channels in response to the child exploitation scandal, also accelerated the launch of a new, more effective classifier that will be able to detect and remove twice as many individual comments. "No form of content that endangers minors is acceptable on YouTube, which is why we have terminated certain channels that attempt to endanger children in any way. Videos encouraging harmful and dangerous challenges targeting any audience are also clearly against our policies," the company said in a blog post announcing the update.
One of our most important responsibilities is keeping children safe on Facebook. We do not tolerate any behavior or content that exploits them online and we develop safety programs and educational resources with more than 400 organizations around the world to help make the internet a safer place for children. For years our work has included using photo-matching technology to stop people from sharing known child exploitation images, reporting violations to the National Center for Missing and Exploited Children (NCMEC), requiring children to be at least 13 to use our services, and limiting the people that teens can interact with after they sign up. Today we are sharing some of the work we've been doing over the past year to develop new technology in the fight against child exploitation. In addition to photo-matching technology, we're using artificial intelligence and machine learning to proactively detect child nudity and previously unknown child exploitative content when it's uploaded.
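Facebook hasn't published the internals of its photo-matching system, which relies on industry tools such as PhotoDNA. As a rough illustration of the general idea behind hash-based matching, here is a minimal sketch using a simple perceptual "average hash": an image is reduced to a bit string, and an upload is flagged if its hash is within a small Hamming distance of any hash on a known-content list. All function names and the threshold here are hypothetical; production systems use far more robust hashes.

```python
# Illustrative sketch only -- not Facebook's actual matching pipeline.
# A perceptual hash maps an image to bits; near-duplicate images
# (recompressed, slightly edited) produce hashes that differ in few bits.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints) to a bit string:
    each bit is 1 if that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

def matches_known(pixels, known_hashes, threshold=5):
    """True if the image's hash is within `threshold` bits of any known hash."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

Because matching happens against hashes rather than the images themselves, a service can block re-uploads of known material without storing or redistributing it, which is also how hash lists can be shared with organizations like NCMEC.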
YouTube uses algorithms and human moderators, but it still couldn't prevent the rise of disturbing, child-exploitative videos on the platform. There are likely several reasons for this; one of them, according to a BuzzFeed report, is the confusing set of guidelines the company gives its contract workers for rating content. The publication interviewed search quality raters who help train the platform's search AI to surface the best possible results for queries by rating videos. It found that the workers are usually instructed to give videos high ratings based mostly on production values.
Today, Facebook's Global Head of Safety, Antigone Davis, published a blog post outlining how the social network fights child exploitation. The company uses standard industry practices, such as requiring users to be 13 years or older, using photo-matching to identify known images and reporting any violations to the National Center for Missing and Exploited Children (NCMEC). However, the company is also developing new techniques to fight these horrific practices. Facebook is taking advantage of AI and machine learning to proactively find child nudity on its platform and report it to NCMEC. As a result, the company is helping to find exploitative content that was previously unknown.