How LinkedIn uses Artificial Intelligence to keep NSFW content out FactorDaily

#artificialintelligence 

When you post something on LinkedIn, chances are that an algorithm made by Rushi Bhatt's team in Bengaluru has checked if it's kosher to be on the professional network. It sounds easy, but consider the scale: LinkedIn has over 560 million members, 20 million companies, and millions of job postings, and it operates in 24 different languages. If millions of users post to the platform seamlessly every day, it is because LinkedIn's algorithms, with a lot of help from humans, green-light their posts before the user can blink an eye.

"We have to walk this fine line between freedom of expression and not letting poor content live on the site. That makes it really complicated for everybody, including humans," says Bhatt, an alum of Amazon and Yahoo with a PhD in cognitive and neural systems from Boston University and degrees from the Tata Institute of Fundamental Research and what is today NIT, Surat.

At its worst, a poor newsfeed can drive users away; at its best, it can keep you hooked on a platform for hours. At LinkedIn, it is the job of the "Feed AI" team to maintain that fidelity, and Bhatt's job is, quite literally, to keep the NSFW stuff out. It's a problem almost all major platforms with user-generated content, be it YouTube or Twitter, struggle with.
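The article does not disclose how LinkedIn's pipeline actually works, but the pattern it hints at, where algorithms auto-approve or auto-reject the clear cases and escalate the ambiguous middle to human reviewers, can be sketched as a simple threshold-based gate. All names and threshold values below are illustrative assumptions, not LinkedIn's system:

```python
# Illustrative sketch only: LinkedIn's real moderation pipeline is not public.
# A hypothetical classifier emits an NSFW score in [0, 1]; confident scores
# are handled automatically, while the uncertain band goes to human review.

def route_post(nsfw_score: float,
               approve_below: float = 0.2,
               reject_above: float = 0.9) -> str:
    """Route a post based on a hypothetical NSFW classifier score.

    The two thresholds define the "fine line" the article describes:
    widening the human-review band trades moderation cost for accuracy.
    """
    if nsfw_score < approve_below:
        return "approve"       # confidently clean: publish immediately
    if nsfw_score > reject_above:
        return "reject"        # confidently NSFW: block the post
    return "human_review"      # uncertain: queue for a human moderator
```

In practice the interesting tuning problem is the width of that middle band: a narrow band keeps the site fast and cheap to run, while a wide band leans on human judgment for anything the model is unsure about.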
