World's first 'certified' deepfake warns viewers not to trust everything they see online
For the last 30 years or so, children have been told not to believe everything they find online, but we may now need to extend this lesson to adults. That's because we are in the midst of a so-called 'deepfake' phenomenon, in which artificial intelligence (AI) technology is used to manipulate video and audio in ways that replicate real life. To help set an example of transparency, the world's first 'certified' deepfake video has been released by AI studio Revel.ai. It appears to show Nina Schick, a professional AI adviser, delivering a warning about how 'the lines between real and fiction are becoming blurred'. Of course, it is not really her, and the video has been cryptographically signed by digital authenticity company Truepic, declaring that it contains AI-generated content.
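The core idea behind such certification is simple: bind a hash of the media to a disclosure flag ("this is AI-generated") and sign the pair, so any later edit to the file or the flag invalidates the signature. The sketch below is purely illustrative and is not Truepic's actual scheme; real systems such as the C2PA standard use asymmetric signatures and certificate chains, whereas this dependency-free example stands in with an HMAC and a made-up demo key.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real certifier
# would use an asymmetric private key backed by a certificate chain.
SIGNING_KEY = b"demo-key"

def make_manifest(content: bytes, ai_generated: bool) -> dict:
    """Bind a content hash to a disclosure flag, then sign the pair."""
    body = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": ai_generated,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return {"body": body, "sig": sig}

def verify(content: bytes, manifest: dict) -> bool:
    """Accept only if the signature checks out AND the bytes are unmodified."""
    payload = json.dumps(manifest["body"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    sig_ok = hmac.compare_digest(manifest["sig"], expected)
    hash_ok = manifest["body"]["sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok

video = b"\x00\x01fake-video-bytes"      # stand-in for the video file
manifest = make_manifest(video, ai_generated=True)
print(verify(video, manifest))           # intact content: verifies
print(verify(video + b"x", manifest))    # tampered content: fails
```

Either tampering with the video bytes or flipping the `ai_generated` flag breaks verification, which is what lets a viewer trust the disclosure.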
90% of online content could be 'generated by AI by 2025,' expert says
Generative AI, like OpenAI's ChatGPT, could completely revamp how digital content is developed, Nina Schick, adviser, speaker, and AI thought leader, told Yahoo Finance Live (video above). "I think we might reach 90% of online content generated by AI by 2025, so this technology is exponential," she said. "I believe that the majority of digital content is going to start to be produced by AI. You see ChatGPT... but there are a whole plethora of other platforms and applications that are coming up." The surge of interest in OpenAI's DALL-E and ChatGPT has facilitated a wide-ranging public discussion about AI and its expanding role in our world, particularly generative AI. "ChatGPT has really captured the public imagination in an extremely compelling way, but I think in a few months' time, ChatGPT is just going to be seen as another tool powered by this new form of AI, known as generative AI," she said.
CES: 90 Percent of Hollywood's Content May Be AI-Driven By 2025 – The Hollywood Reporter
Artificial intelligence is poised to create a seismic shift in entertainment, and the technology isn't just in development. It has arrived, and Hollywood needs to be prepared. That was the message of a SAG-AFTRA-hosted CES panel, as AI-driven tools permeated the consumer tech show's exhibition halls. Nina Schick, author and advisor on generative AI, projected that 90 percent of content may be, at least in part, AI-generated by 2025. She further predicted that everyone in the audience would be planning to use some form of generative AI within the month.
Can You Tell Whether This Headline Was Written by a Robot?
You probably haven't noticed, but there's a good chance that some of what you've read on the internet was written by robots. And it's likely to be a lot more soon. Artificial-intelligence software programs that generate text are becoming sophisticated enough that their output often can't be distinguished from what people write. And a growing number of companies are seeking to make use of this technology to automate the creation of information we might rely on, according to those who build the tools, academics who study the software, and investors backing companies that are expanding the types of content that can be auto-generated. "It is probably impossible that the majority of people who use the web on a day-to-day basis haven't at some point run into AI-generated content," says Adam Chronister, who runs a small search-engine optimization firm in Spokane, Wash.
The impact of deepfakes: How do you know when a video is real?
In a world where seeing is increasingly no longer believing, experts are warning that society must take a multi-pronged approach to combat the potential harms of computer-generated media. As Bill Whitaker reports this week on 60 Minutes, artificial intelligence can manipulate faces and voices to make it look like someone said something they never said. The result is videos of things that never happened, called "deepfakes." Often, they look so real, people watching can't tell. Even Justin Bieber has been tricked by a series of deepfake videos on the social media video platform TikTok that appeared to be of Tom Cruise.
Deepfake technology could soon allow anyone to create Hollywood-quality visual effects
Deepfake technology could soon give anybody with a computer or phone the power of a Hollywood special effects department. In the next several years, technologists predict, we will all be able to create photo-realistic videos and sound recordings using software enabled by artificial intelligence. That means that instead of using cameras and microphones, next-generation "synthetic media" will be completely generated by computers. Bill Whitaker looks at the state of the art today and volunteers as a guinea pig in an amazing deepfake transformation in which he becomes 30 years younger. The story will be broadcast on the next edition of 60 Minutes, Sunday, October 10 at 7 p.m. ET/PT on CBS. Nina Schick, a London-based researcher and political consultant, was advising world leaders on Russian disinformation and election security when she first came across deepfakes. They have only gotten better since then. "The incredible thing about deepfakes and synthetic media is the pace of acceleration when it comes to the technology," Schick says. "Within five to seven years, we are basically looking at a trajectory where any single creator -- so a YouTuber, a TikToker -- will be able to create the same level of visual effects that is only accessible to the most well-resourced Hollywood studio today. It is without a doubt one of the most important revolutions in the future of human communication and perception."
Deepfakes and the 2020 US elections: what (did not) happen
In retrospect, Nisos experts made the right forecast. However, theirs was a clear minority opinion. Before and after their report, dozens of politicians and institutions drew considerable attention to the approaching danger: 'imagine a scenario where, on the eve of next year's presidential election, the Democratic nominee appears in a video where he or she endorses President Trump. Now, imagine it the other way around.' (Sprangler, 2019). It is fair to say that deepfakes' high potential for disinformation was noticed long before these hypothetical consequences were evoked, mainly because deepfakes had been shown to be highly credible. Two examples: 'In an online quiz, 49 percent of people who visited our site said they incorrectly believed Nixon's synthetically altered face was real and 65 percent thought his voice was real' (Panetta et al., 2020), and 'Two-thirds of participants believed that one day it would be impossible to discern a real video from a fake one.'
AI Speeds Patent Process, But Robot Attorneys Still a Ways Off
Marc Kaufman typically needs 20 hours to create one patent application for a software innovation. But the Washington-based patent attorney says he's saving valuable time with a little high-tech help. Kaufman, a partner at Rimon PC, has been using an artificial intelligence tool called Specifio that analyzes claims (the words that define an invention) and generates a bare-bones draft with a description and illustrations in a couple of minutes. He says the tool he started using six months ago has given him five extra hours to fine-tune his clients' applications to boost their likelihood of approval. "Saving hours allows me to really understand the client's business and be very strategic with the patent application, while coming in at a price that clients are willing to pay," Kaufman said.
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Asia > China (0.05)
The QWERTY Days Are Almost Over
Pretty soon, though, something has to give. The most interesting internet-connected gadgets today aren't evolved typewriters with Intel inside. They're watches, headsets and fridges, without space for the roomy keyboards we love. Even our ultrathin phones can barely serve up a QWERTY experience. That's why tech companies had to invent autocorrect.