Who's to Blame for AI-Generated Harm--Users or Companies?
On the last day of February, NYU professor Gary Marcus published an essay entitled "The threat of automated misinformation is only getting worse." He warned about the ease with which anyone can create misinformation backed by fake references using Bing "with the right invocations." Shawn Oakley, whom Marcus dubbed a "jailbreaking expert," said that "standard techniques" suffice to make it work, providing evidence that the threat of automated, AI-generated misinformation at scale is growing.

Marcus shared his findings on Twitter, and Founders Fund's Mike Solana responded:

My interpretation of Solana's sarcastic tweets is that claiming an AI model is a dangerous tool for misinformation (or, more generally, for harm of some kind) isn't a good argument if you've consciously broken its filters--he implies the problem isn't the tool's nature but your misuse of it, and thus you, not the company that created the tool, are to blame. His "analogy" between Bing Chat and a text editor misses the point (language models can generate human-sounding misinformation at scale--you can't do that with Microsoft Word, but you can with Microsoft Bing). Still, even if Marcus is right, there's some truth in Solana's implied stance.
Mar-13-2023, 23:38:00 GMT
- Technology