The phrase "reasonable expectation of privacy" is linked to the Fourth Amendment in much the same way "I'll be back" is linked to the Terminator franchise. In essence, it dictates when the amendment's guarantee of "the right of the people to be secure in their persons, houses, papers, and effects" applies. Because the Supreme Court has always treated American homes as more or less sacrosanct, there's generally little to fear from the government inside your own home (unless you give law enforcement probable cause). As for private parties, privacy laws have traditionally done a pretty good job of protecting Americans from invasive or inappropriate photography and recordings.
Microsoft's artificially intelligent chat bot Tay went rogue earlier this week, harassing some users with tweets full of racist and misogynistic language. The AI was programmed to sound like a millennial and to learn natural speech by interacting with people online, but Tay picked up some vile ideas from trolls and wound up saying things like "feminists ... should all die and burn in hell" and "Hitler was right." Tay has no blind spots built in: because it's constantly learning from its users, it absorbs whatever they feed it. "My point here is that [Tay] can/should serve as an excellent lesson to AI creators to understand how people react writ large to these types of bots and test accordingly to avoid providing an algorithmic evolutionary platform for people to spread hate," Havens added.
As Quinn herself pointed out on Twitter, the big problem here is that Microsoft apparently failed to set up any meaningful filters on what Tay can tell users. It's fine for the AI to learn from people "to experiment with and conduct research on conversational understanding," but the bot could have shipped with filters preventing it from deploying the n-word or claiming the Holocaust was "made up." Microsoft apparently didn't consider the abuse people suffer online, much as it failed to consider how half-naked dancing women at a press event last week might be perceived. Of course, when we talked with Tay on Kik, it stumbled over pretty simple conversational cues, so maybe we don't need to worry about the robot takeover just yet.
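To make the "meaningful filters" idea concrete, here is a minimal sketch of what an output blocklist could look like. This is purely illustrative: the `BLOCKLIST` terms, the `is_safe`/`respond` names, and the canned fallback reply are all assumptions, not anything Tay actually shipped with, and a production filter would be far more sophisticated than simple word matching.

```python
import re

# Placeholder terms standing in for a curated list of slurs and hate speech.
# These are hypothetical examples, not real filter entries.
BLOCKLIST = {"badword", "worseword"}

def is_safe(reply: str) -> bool:
    """Return False if the candidate reply contains a blocklisted term."""
    words = re.findall(r"[a-z']+", reply.lower())
    return not any(word in BLOCKLIST for word in words)

def respond(candidate: str) -> str:
    # Fall back to a canned reply instead of echoing learned toxic content.
    return candidate if is_safe(candidate) else "Let's talk about something else."
```

Even a crude last-line check like this would have stopped the bot from repeating the worst phrases it learned, which is the gap critics were pointing at.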