Apple gave a look at its latest accessibility updates during its annual WWDC on Monday, including new voice and assistive tech features. Initially previewed for Global Accessibility Awareness Day in May, the WWDC announcement confirmed that the new features will launch with iOS 17 and iPadOS 17. For iOS users with cognitive disabilities, Apple's new Assistive Access feature lets people customize apps with high-contrast buttons and large text labels to meet their individual needs. Apple also added Live Speech and Personal Voice for people who are unable to speak, have trouble speaking or may lose their voice over time. With Live Speech, you can type what you want to say and have it spoken aloud to others on a phone or FaceTime call, or jot down commonly used phrases to select during conversation, avoiding the delay that comes with typing it out in the moment. Personal Voice creates a voice that sounds like you by recording 15 minutes of random phrases.
Buying clothes in person can be a frustrating experience. You go to the fitting room, try on the item, and find you've picked the wrong size. You then have to get dressed, go back onto the shop floor, get the right-sized item, and go through the whole process again in the fitting room. Finally, you find the right item in the right size -- but now you have to wait in a long line to make your purchase. What you thought was going to be a quick and easy procedure has turned into a bit of a slog.
The future of Hollywood looks a lot like Deepfake Ryan Reynolds selling you a Tesla. In a video, since removed but widely shared on Twitter, the actor is bespectacled in thick black frames, his mouth moving independently from his face, hawking electric vehicles: "How much do you think it would cost to own a car that's this fucking awesome?" On the verisimilitude scale, the video, which originally circulated last month, registered as blatantly unreal. Then its creator, financial advice YouTuber Kevin Paffrath, revealed he had made it as a ploy to attract the gaze of Elon Musk. Elsewhere on Twitter, people beseeched Reynolds to sue.
Brad Smith, the president of Microsoft, has said that his biggest concern around artificial intelligence is deepfakes: realistic-looking but false content. In a speech in Washington aimed at addressing how best to regulate AI, a debate that went from wonky to widespread with the arrival of OpenAI's ChatGPT, Smith called for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes. "We're going to have to address the issues around deepfakes. We're going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians," he said. "We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI." Smith also called for licensing for the most critical forms of AI with "obligations to protect security, physical security, cybersecurity, national security".
Criminals are taking advantage of AI technology to conduct misinformation campaigns, commit fraud and obstruct justice through deepfake audio and video. Australia's eSafety Commission has raised concerns about the potential for artificial intelligence (AI) to assist predators in grooming children online as the country debates restrictions on the emerging technology. Australian eSafety Commissioner Julie Inman Grant posted on Twitter that "the manipulative power of generative AI to execute on grooming and sextortion is no longer speculative." "eSafety is already receiving cyberbullying reports and image-based abuse reports around deepfakes," she wrote. "The fact is AI has been 'exfiltrated into the wild' without guardrails."
While some uses of deepfakes are lighthearted, like the pope donning a white Balenciaga puffer jacket or an AI-generated song using vocals from Drake and The Weeknd, they can also sow doubt about the authenticity of legitimate audio and video. As artificial intelligence (AI) continues to advance, so does the proliferation of fake content that experts warn could pose a serious threat to many aspects of everyday life if proper controls aren't put in place. AI-manipulated images, videos and audio known as "deepfakes" are often used to create convincing but false representations of people and events.
Turkish President Recep Tayyip Erdogan's main political opponent accused Russia of using deepfakes and other artificial intelligence (AI)-generated material to meddle in the country's upcoming presidential election. "The Russians have a vested interest in backing an Erdogan presidency to ensure that he basically stays in power, mainly because the Russians benefit [from] driving a wedge between Turkey and NATO, and they've been very successful about that in the last decade or so," Sinan Ciddi, non-resident senior fellow on Turkey at the Foundation for Defense of Democracies, told Fox News Digital. "So, in the last several days, weeks, it has been credibly reported by Turkish sources that Russian bot accounts, Twitter accounts, all sorts of disinformation campaigns have started pressing the thumb down on backing the Erdogan presidency, and that comes as no surprise." The election, scheduled for May 14 alongside parliamentary elections, has proven difficult for Erdogan as his election rival Kemal Kilicdaroglu maintains a slight lead in opinion polls.
Senator Pete Ricketts of Nebraska told Fox News Digital on Thursday that he's concerned about China's use of artificial intelligence (AI) after a report claimed pro-Chinese groups were spreading CCP propaganda using AI-generated news anchors. EXCLUSIVE: China's expansive AI operations could play a concerning role in the 2024 election cycle, Sen. Pete Ricketts warned on Thursday. "There's absolutely a possibility that they could do that for the 2024 election, and that's what we have to be on guard [for]," Ricketts told Fox News Digital in an interview in his Senate office. During a Senate Foreign Relations subcommittee hearing earlier this month, Ricketts referenced China and its use of AI technology to create "deepfakes," which are fabricated videos and images that can look and sound like real people and events. A report released earlier this year by a U.S.-based research firm claimed a "pro-Chinese spam operation" was using AI deepfake technology to create videos of fake news anchors reciting Beijing's propaganda.
Fake AI pictures and videos will be nearly impossible to discern from real images as the technology behind deepfakes advances, a University of California, Berkeley professor says. In a nearly unanimous vote, Minnesota Senate lawmakers passed a bill Wednesday that would make it a crime to non-consensually share deepfake sexual images of others, or to share deepfakes to hurt a political candidate or influence an election. Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Deepfake pornography and political misinformation have been created with the technology since it first began spreading across the internet several years ago. That technology is easier to use now than ever before.
Last week, the Republican National Committee put out a video advertisement against Biden, which featured a small disclaimer in the top left of the frame: "Built entirely with AI imagery." Critics questioned the diminished size of the disclaimer and suggested its limited value, particularly because the ad marks the first substantive use of AI in political attack advertising. As AI-generated media become more mainstream, many have argued that text-based labels, captions, and watermarks are crucial for transparency. But do these labels actually work? For a label to work, it needs to be legible.