AI-generated deepfake
Microsoft joins coalition to scrub revenge and deepfake porn from Bing
Microsoft announced it has partnered with StopNCII to help remove non-consensual intimate images -- including deepfakes -- from its Bing search engine. When a victim opens a "case" with StopNCII, a digital fingerprint, also called a "hash," is created of the intimate image or video stored on that individual's device, without their needing to upload the file. The hash is then sent to participating industry partners, who can seek out matches for the original and remove them from their platforms if the content violates their policies. The process also applies to AI-generated deepfakes of a real person. Several other tech companies have agreed to work with StopNCII to scrub intimate images shared without permission.
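The hash-matching flow described above can be sketched in a few lines of Python. This is a minimal illustration, not StopNCII's actual system: real services use perceptual hashes (which tolerate resizing and re-encoding) rather than the exact cryptographic hash used here, and the function and variable names below are hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # The image itself never leaves the victim's device;
    # only this fingerprint string is shared with partners.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical partner-side store of hashes received via the case database.
case_hashes = {fingerprint(b"reported-image-bytes")}

def matches_known_case(candidate: bytes) -> bool:
    # A platform checks newly surfaced media against the shared hashes.
    return fingerprint(candidate) in case_hashes

print(matches_known_case(b"reported-image-bytes"))  # True
print(matches_known_case(b"unrelated-photo-bytes"))  # False
```

Because only fingerprints are exchanged, a match tells the platform "this file corresponds to a reported case" without the reporting individual ever uploading the intimate image itself.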
Senators introduce bill to protect individuals against AI-generated deepfakes
Today, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person's voice or likeness without that individual's consent. The bill, introduced by Sens. Amy Klobuchar (D-Minn.) and Thom Tillis (R-N.C.), is fully titled the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024. If it passes, the NO FAKES Act would create an option for people to seek damages when their voice, face or body is recreated by AI. Both individuals and companies would be held liable for producing, hosting or sharing unauthorized digital replicas, including ones made by generative AI. We've already seen many instances of celebrities finding AI imitations of themselves out in the world.
Keep these tips in mind to avoid being duped by AI-generated deepfakes
AI fakery is quickly becoming one of the biggest problems confronting us online. Deceptive pictures, videos and audio are proliferating as a result of the rise and misuse of generative artificial intelligence tools. With AI deepfakes cropping up almost every day, depicting everyone from Taylor Swift to Donald Trump, it's getting harder to tell what's real from what's not.
- North America > United States > California (0.25)
- Europe > Ukraine (0.16)
- North America > United States > District of Columbia > Washington (0.05)
- Media (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
Princess of Wales photo furore underlines sensitivity around image doctoring
At a time when suspicion of manipulated media has reached a new pitch of concern, the Princess of Wales photo furore underlines the sensitivity around image doctoring. Catherine was the subject of an image editing row in 2011 when Grazia adapted a photo of her on her wedding day – but that was before breakthroughs in artificial intelligence put everyone on edge. There has been a deluge of AI-generated deepfakes in recent years, from a video of Volodymyr Zelenskiy telling his soldiers to surrender, to explicit images of Taylor Swift. Historical examples of image manipulation can be clunky – from Argentine footballers clutching handbags to Stalin's missing underlings – but there is now an alarming credibility to AI-generated content. Catherine's attempts to adjust a family photo, amid frenzied social media speculation about her wellbeing, have run straight into widespread concerns about trust in images, text and audio in a year when half the world is going to the polls.
- Europe > United Kingdom > Wales (0.65)
- North America > United States > California (0.05)
- North America > Canada > Ontario > Middlesex County > London (0.05)
- Europe > France (0.05)
- Media (0.91)
- Leisure & Entertainment (0.91)
- Government > Regional Government > Europe Government > United Kingdom Government (0.61)
Microsoft, OpenAI, Google and others agree to combat election-related deepfakes
A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement's vague language and lack of binding enforcement call into question whether it goes far enough. The list of companies signing the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" includes those that create and distribute AI models, as well as the social platforms where deepfakes are most likely to pop up. The signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).
- North America > United States > Texas (0.05)
- North America > United States > New Hampshire (0.05)
- North America > United States > California (0.05)
Michigan to pass law demanding transparency in AI-generated political ads
Michigan is joining an effort to curb deceptive uses of artificial intelligence and manipulated media through state-level policies as Congress and the Federal Elections Commission continue to debate more sweeping regulations ahead of the 2024 elections. Campaigns at the state and federal level will be required to clearly disclose which political advertisements airing in Michigan were created using artificial intelligence under legislation expected to be signed in the coming days by Gov. Gretchen Whitmer, a Democrat. The legislation would also prohibit the use of AI-generated deepfakes within 90 days of an election unless accompanied by a separate disclosure identifying the media as manipulated.
- North America > United States > Michigan (0.88)
- North America > United States > Minnesota (0.06)
- North America > United States > Texas (0.05)
- (9 more...)
The deepfake danger: When it wasn't you on that Zoom call
In August, Patrick Hillman, chief communications officer of blockchain ecosystem Binance, knew something was off when he was scrolling through his full inbox and found six messages from clients about recent video calls with investors in which he had allegedly participated. "Thanks for the investment opportunity," one of them said. "I have some concerns about your investment advice," another wrote. Others complained the video quality wasn't very good, and one even asked outright: "Can you confirm the Zoom call we had on Thursday was you?" With a sinking feeling in his stomach, Hillman realized that someone had deepfaked his image and voice well enough to hold 20-minute "investment" Zoom calls trying to convince his company's clients to turn over their Bitcoin for scammy investments.
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Trading (1.00)