FTC warns tech companies against AI shenanigans that harm consumers

Engadget

Since its establishment in 1914, the US Federal Trade Commission has stood as a bulwark against the fraud, deception, and shady dealings that American consumers face every day -- fining brands that "review hijack" Amazon listings, making it easier to cancel magazine subscriptions and blocking exploitative ad targeting. On Monday, Michael Atleson, Attorney, FTC Division of Advertising Practices, laid out both the commission's reasoning for how emerging generative AI systems like ChatGPT and DALL-E 2 could be used to violate the FTC Act's standard of unfairness, and what it would do to companies found in violation. "Under the FTC Act, a practice is unfair if it causes more harm than good," Atleson said. "It's unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition." He notes that the new generation of chatbots like Bing, Bard and ChatGPT can be used to influence a user's "beliefs, emotions, and behavior."


FTC stakes out turf as top AI cop: 'Prepared to use all our tools'

FOX News

FOX Business correspondent Lydia Hu has the latest on jobs at risk as AI further develops on "America's Newsroom." The Federal Trade Commission (FTC) is making a play to be a key regulator of artificial intelligence (AI) systems, just as technology heavyweights and policymakers are clamoring for federal government oversight of AI applications. Last week's call for a moratorium on new AI development, signed by tech figures like Elon Musk and Steve Wozniak, kick-started a discussion about whether and how the government should step in and put guardrails up around potentially dangerous AI systems. Several lawmakers responded by saying a moratorium would be difficult to impose, leaving a huge gap between calls for action and the realities of how quickly Congress can act. However, the FTC has made it clear over the last week that it is prepared to bridge that gap and take a stab at regulating emerging AI systems. The federal agency tasked with policing "deceptive or unfair business practices" says it has a dog in this fight and is building up the capacity to take on the threats that AI poses to consumers.


FTC warns makers of AI software that can be used for fraud • The Register

#artificialintelligence

America's Federal Trade Commission has warned it may crack down not only on companies that use generative AI tools to scam folks, but also on those making the software in the first place, even if those applications were not created with fraud in mind. Commercial software and cloud services, as well as open source tools, can be used to churn out fake images, text, videos, and voices on an industrial scale, which is all perfect for cheating marks. Picture adverts for stuff featuring convincing but faked endorsements by celebrities; that kind of thing is on the FTC's radar. "Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals," Michael Atleson, an attorney for the FTC's division of advertising practices, wrote in a memo this week.