media literacy
Reading, writing and … disinformation: should schoolchildren be taught media literacy like maths?
Beneath an old Queenslander on the south side of the Brisbane River, beside a garage with a hand-painted sign that reads "recording" and above a computer in a cluttered spare room, is a Post-it note. The home – "not unlike Bluey's" – belongs to Bryce Corbett and doubles as an unofficial headquarters of the children's news podcast he founded and co-presents, Squiz Kids. Daily episodes tackle a headline story – like South Australia's proposal to ban children from social media – covered to inform, but not frighten, kids. The sugar-coating: a bit of fun science, pop culture and, of course, animal stories – the alligator that came to school, the world's funniest crab joke. Corbett's delivery, too, is professional yet upbeat.
- Oceania > Australia > South Australia (0.25)
- South America > Brazil (0.15)
- Oceania > Australia > Queensland (0.06)
- (3 more...)
- Media > News (1.00)
- Government (1.00)
- Education > Educational Setting > K-12 Education (0.49)
AI deepfakes are endangering democracy. Here are 4 ways to fight back
With the recent explosion of AI, dazzling images, videos, audio and texts can now be easily generated by anyone with just a few simple inputs. While this technology offers many astonishing benefits, it also poses significant dangers. Among the most pernicious of these is the creation of deepfakes – highly realistic yet manipulated or fabricated content that falsely depicts real people doing or saying things they never did. Our ability to discern fact from fiction, along with democracy itself, is in the crosshairs. In recent months, deepfakes have entered the mainstream like never before.
- Europe > Ukraine (0.15)
- Asia > China (0.15)
- North America > United States > Wisconsin (0.05)
- (2 more...)
- Media (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (3 more...)
AI company partners with Bear Grylls on new fact-checking system 'Mission Seekr'
Seekr Technologies CEO Pat Condo spoke with Fox News Digital about a partnership with Bear Grylls to encourage digital media literacy among young people. AI company Seekr and survivalist Bear Grylls are aiming to develop the "survival skill" of digital media literacy through their latest educational platform Mission Seekr. The company originally announced the project in June as an effort to arm the next generation "with critical media literacy tools and the confidence to safely navigate the online landscape." "At Seekr, we're committed to creating a more informed society and empowering people to make smart and educated decisions about the content they consume," Pat Condo, CEO of Seekr Technologies said in a statement. "Together with Bear Grylls, we're embarking on a groundbreaking adventure to develop critical media literacy skills and bring trust to the online experience."
Deepfakes are everywhere. Here's how to spot them
A trickle of AI-fueled misinformation has turned into a powerful stream over the past year, with fake photos and videos--from Donald Trump's and Vladimir Putin's "arrest" to the Pope's "gangsta" outfit--highlighting the scope of the problem. "Deepfake" is an umbrella term for various types of synthetic content, created or altered with the aid of artificial intelligence, which can appear to show events, scenes or conversations that never happened. These creations come in a variety of visual, audio, and textual forms and can feature something innocuous, such as Jim Carrey in The Shining, or something far more sinister and dangerous--like the fake videos of Joe Biden's "address to the nation," for example. Initially, deepfake technology was largely used to generate pranks and involuntary pornography. Now, it is increasingly deployed as a vehicle for misinformation--scientific, medical, financial, and, perhaps most worryingly, political. Newsweek previously reported on warnings that these technologies already present a real threat and have the potential to upend the democratic process in the 2024 election, with calls growing louder for regulators, big tech, and governments to intervene.
- Media > News (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.54)
Voice deepfakes are getting easier to spot
New research has shown that voice deepfakes are becoming easier to spot as synthetic recreations of real voices, thanks to the anatomy of our vocal tracts. Researchers at the University of Florida have devised a method of simulating images of a human vocal tract's apparent movements while a voice clip - real or fake - is played back. Professor of Computer and Information Science and Engineering Patrick Traynor and PhD student Logan Blue wrote that they and their colleagues found that simulations prompted by voice deepfakes weren't constrained by "the same anatomical limitations humans have", with some vocal tract measurements having "the same relative diameter and consistency as a drinking straw". Though scientists are starting to spot voice deepfakes with simulation and anatomical comparison, the risk of an ordinary person being tricked by any deepfake - which could lead to identity theft - remains a problem. Ordinary people don't yet have access to these tools.
Weekly recap: Deepfakes vs. media literacy
A look back at the computer future: who would have thought it? Deepfakes work. This technically outdated deepfake portrait is easy to debunk; in the future, it will be impossible. Two recent studies show that people can hardly distinguish synthetic media – also called "deepfakes" depending on the context – from original media. With simple portrait images, the distinction is almost impossible.
What To Do About Deepfakes
Synthetic media technologies are rapidly advancing, making it easier to generate nonveridical media that look and sound increasingly realistic. So-called "deepfakes" (owing to their reliance on deep learning) often present a person saying or doing something they have not said or done. The proliferation of deepfakes creates a new challenge to the trustworthiness of visual experience, and has already created negative consequences such as nonconsensual pornography [11], political disinformation [19], and financial fraud [3]. Deepfakes can harm viewers by deceiving or intimidating them, harm subjects by causing reputational damage, and harm society by undermining societal values such as trust in institutions [7]. What can be done to mitigate these harms?
- North America > United States > Virginia > Albemarle County > Charlottesville (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Illinois > Cook County > Evanston (0.04)
- North America > United States > California (0.04)
- Law (0.95)
- Information Technology > Security & Privacy (0.94)
- Government (0.70)
- Media (0.68)
I create "convincing" manipulated images and videos -- but quality may not matter much
I'm part of a larger U.S. government project that is working on developing ways to detect images and videos that have been manipulated. My team's work, though, is to play the role of the bad guy. We develop increasingly devious, and convincing, ways to generate fakes -- in hopes of giving other researchers a good challenge when they're testing their detection methods. For the past three years, we've been having a bit of fun dreaming up new ways to try to change the meaning of images and video. We've created some scenarios ourselves, but we've also drawn plenty of inspiration from current events and from actual bad actors trying to twist public opinion.