Microsoft released an AI chatbot named Tay to the public. Within the first 24 hours, interaction with humans caused Tay to become racist and make offensive statements. This reflects not only fallen humanity, but also the political correctness of our day. Separately, Bloomberg reported on Baby-X, a highly developed AI modeled on the biological structures of the human brain and body. When will these machines gain a consciousness of their own?
Question: can the AI vision systems from Microsoft and Google, which are freely available to anybody, identify NSFW (not safe for work, i.e. nudity) images? And can that identification be used to automatically censor images by blacking out or blurring the NSFW areas? Method: I spent a few hours over the weekend knocking together some very rough code in Microsoft Office to find image files on my computer and send them to Google Vision and Microsoft Vision for analysis. Result: yes, both did reasonably well at (a) identifying images that might need censoring and (b) indicating where on the image things should be blocked out.
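To make the pipeline concrete, here is a minimal Python sketch of the two halves of the experiment: building a request for Google Vision's SafeSearch detection (the `SAFE_SEARCH_DETECTION` feature of the `images:annotate` endpoint, which returns likelihood labels such as `VERY_UNLIKELY` through `VERY_LIKELY` for categories like `adult` and `racy`), and blacking out a rectangular region of an image. Note that the threshold choice, the categories checked, and the `black_out` helper with its `(left, top, right, bottom)` box format are my own assumptions for illustration, not the author's actual Office code, and the box coordinates would in practice have to come from whatever region data the service returns.

```python
import base64

# Assumption: treat LIKELY and above as "needs censoring".
FLAG_LEVELS = {"LIKELY", "VERY_LIKELY"}

def build_safesearch_request(image_bytes):
    """Build the JSON body for a Vision API images:annotate call
    requesting SafeSearch detection on one image."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "SAFE_SEARCH_DETECTION"}],
        }]
    }

def needs_censoring(safe_search_annotation):
    """Decide whether an image should be censored, given the
    safeSearchAnnotation dict from the API response."""
    return any(
        safe_search_annotation.get(category) in FLAG_LEVELS
        for category in ("adult", "racy", "violence")
    )

def black_out(pixels, box):
    """Black out a rectangular region of an image.

    `pixels` is a mutable 2D grid of RGB tuples; `box` is a
    hypothetical (left, top, right, bottom) rectangle in pixel
    coordinates."""
    left, top, right, bottom = box
    for y in range(top, bottom):
        for x in range(left, right):
            pixels[y][x] = (0, 0, 0)
    return pixels
```

The request body would be POSTed to `https://vision.googleapis.com/v1/images:annotate` with an API key; a blur filter (e.g. via an imaging library) could replace the hard black-out for a softer censoring effect.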