The phrase "reasonable expectation of privacy" is linked to the Fourth Amendment in the same way the phrase "I'll be back" is linked to the Terminator franchise. In essence, it dictates the applicability of the law guaranteeing "the right of the people to be secure in their persons, houses, papers, and effects." Given that the Supreme Court has always treated American homes as more or less sacrosanct, there's likely little to fear from the government (unless you give law enforcement probable cause). As for private citizens, privacy laws have traditionally done a pretty good job of protecting Americans from invasive or inappropriate photography and recordings. That does not mean, however, that private recordings never go public.
Microsoft's artificially intelligent "chat bot" Tay went rogue earlier this week, harassing some users with tweets full of racist and misogynistic language. The AI was programmed to sound like a millennial and learn natural speech by interacting with people online, but Tay picked up some pretty vile ideas from trolls and wound up saying things like "feminists ... should all die and burn in hell" and "Hitler was right." Microsoft took the bot offline Thursday to make adjustments. Viewed through a certain lens, there's actually a bit to celebrate about this spectacular failure. The bot did exactly what it was designed to do: acquire knowledge from the people it talked with.
As Quinn herself pointed out on Twitter, the big problem here is that Microsoft apparently failed to set up any meaningful filters on what Tay can tell users. It's cool that the AI can learn from people "to experiment with and conduct research on conversational understanding," but maybe the bot could've been set up with filters that would have prevented it from deploying the n-word or claiming that the Holocaust was "made up." Microsoft apparently didn't consider the abuse people suffer online, much as it failed to consider how half-naked dancing women at a press event last week might've been perceived. Then again, if an AI has restraints put in place by people to shape specific behaviors, that somewhat defeats the purpose of letting an artificial mind train itself. It's a sticky wicket that raises ethical questions with broader implications. A dumb chat bot may not be a huge deal on its own, but software that can similarly ingest data to interact with humans and sway their votes, for example, is a much bigger problem.
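To make the filtering idea concrete, here is a minimal sketch of the kind of output check Tay apparently lacked: before a generated reply is posted, screen it against a blocklist of slurs and denialist phrases, and fall back to a canned line if it matches. Everything here (the function names, the pattern list, the `generate` callback) is a hypothetical illustration, not Microsoft's actual implementation, and real content moderation is far harder than keyword matching.

```python
import re

# Hypothetical, deliberately tiny blocklist for illustration; a real
# deployment would maintain a much larger, curated, regularly updated list.
BLOCKED_PATTERNS = [
    r"\bhitler\s+was\s+right\b",
    r"\bholocaust\s+was\s+made\s+up\b",
    r"\bshould\s+all\s+die\s+and\s+burn\s+in\s+hell\b",
]

def is_postable(reply: str) -> bool:
    """Return False if the generated reply matches any blocked pattern."""
    lowered = reply.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def safe_reply(generate, prompt: str,
               fallback: str = "Let's talk about something else.") -> str:
    """Wrap a text generator so filtered replies become a canned fallback.

    `generate` is any callable that maps a prompt string to a reply string,
    standing in for the bot's learned conversational model.
    """
    reply = generate(prompt)
    return reply if is_postable(reply) else fallback
```

The design tension the article describes shows up even in this toy: the filter sits outside the learning loop, so it constrains what the bot says without constraining what it learns, which is exactly the compromise between hand-coded restraints and a self-training mind.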