Microsoft has a new plan to prove what's real and what's AI online

MIT Technology Review 

A new proposal calls on social media and AI companies to adopt strict verification, but the company hasn't committed to following its own recommendations.

There are the high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times, manipulated content slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting.

It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what's real online. An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today's most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that AI companies and social media platforms can adopt.