Goto

Collaborating Authors

Wardle


AI isn't magic or evil. Here's how to spot AI myths.

Washington Post - Technology News

Humanizing AI systems also stokes our fears, and scared people are more likely to believe and spread false information, said Wardle of Brown University. Thanks to science-fiction authors, our brains are brimming with worst-case scenarios, she noted. Stories such as "Blade Runner" or "The Terminator" present a future where AI systems become conscious and turn on their human creators. Since many people are more familiar with sci-fi movies than with the nuances of machine-learning systems, we tend to let our imaginations fill in the blanks. By noticing anthropomorphism when it happens, Wardle said, we can guard against AI myths.


Apple Booted the Wordle Copycat Apps, But More Will Come

WIRED

A game developer can file for a patent on an original gaming idea, a legal process that has been used to strangle video game clones in the past. But getting a patent is a long and arduous process that can fall apart if there's "prior art" predating the idea (or if the mechanic could be considered legally "obvious").


Posing as satire, misinformation spreads quickly online

The Japan Times

Hoaxes spread quickly online, be they about celebrities, politicians or anyone else. But falsehoods labeled as satire can slip through the defenses of social media companies, allowing people to peddle fiction as fact while turning a profit. The claims tend to be spectacular: Bill Gates arrested for child trafficking, Tom Hanks executed by the U.S. military, or Pope Francis declaring that a COVID-19 vaccine would be required to enter heaven. These bogus allegations originated from articles on websites that carry disclaimers identifying them as satirical. The problem is that many people believe them anyway.


Tracking down three billion litres of lost water

BBC News

You'd think that of all the leaks in the country, the one that pops up on the street where a professor of water systems lives would get fixed pretty quickly. But as Vanessa Speight will tell you, that's sadly not the case. "It just comes out of the pavement and runs down the road," says Prof Speight, an expert in drinking water quality at the University of Sheffield. It was roughly a year ago that she first reported the problem to her local water firm. Despite efforts to locate the source of the leak, the company has come up dry. "It's probably been six different times they've dug up the road," Prof Speight adds.


What should newsrooms do about deepfakes? These three things, for starters

#artificialintelligence

Headlines from the likes of The New York Times ("Deepfakes Are Coming. We Can No Longer Believe What We See"), The Wall Street Journal ("Deepfake Videos Are Getting Real and That's a Problem"), and The Washington Post ("Top AI researchers race to detect 'deepfake' videos: 'We are outgunned'") would have us believe that clever fakes may soon make it impossible to distinguish truth from falsehood. Deepfakes -- pieces of AI-synthesized image and video content persuasively depicting things that never happened -- are now a constant presence in conversations about the future of disinformation. These concerns have been kicked into even higher gear by the swiftly approaching 2020 U.S. election. A video essay from The Atlantic admonishes us: "Ahead of 2020, Beware the Deepfake."


Clever Tool Uses Apple's Video Game Logic Engine to Protect Macs

WIRED

Between new types of malware, egregious bugs, and universal threats like phishing, Macs are not the invulnerable lockboxes Apple once touted. But in thinking about how to defend Macs against a new generation of threats, researchers at the security firm Digita are taking advantage of features Macs already offer to monitor threats in unexpected ways. And it's all powered by Apple's logic engine for video games. At the RSA security conference in San Francisco on Tuesday, Digita chief research officer Patrick Wardle is presenting GamePlan, a tool that watches for potentially suspicious events on Macs and flags them for humans to investigate. The general concept sounds similar to other defense platforms, and it hooks into familiar detection mechanisms: has a USB stick been inserted into a machine?
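The article doesn't detail GamePlan's internals, but the rule-based monitoring it describes can be sketched in a few lines: system events flow through a set of declarative rules, and any match is flagged for a human analyst. The event schema and rule names below are illustrative assumptions, not GamePlan's actual API.

```python
# Minimal sketch (assumed event schema, not GamePlan's real API) of a
# rule-based event monitor: each incoming system event is checked
# against declarative rules, and matches are flagged for a human.

RULES = [
    # (rule name, predicate over an event dict)
    ("usb-inserted",      lambda e: e["type"] == "device.attach"
                                    and e.get("bus") == "usb"),
    ("mic-access",        lambda e: e["type"] == "av.capture"
                                    and e.get("device") == "microphone"),
    ("persistence-added", lambda e: e["type"] == "file.write"
                                    and "/LaunchAgents/" in e["path"]
                                    and e["path"].endswith(".plist")),
]

def flag_events(events):
    """Return (rule name, event) pairs for every rule an event matches."""
    hits = []
    for event in events:
        for name, predicate in RULES:
            if predicate(event):
                hits.append((name, event))
    return hits
```

Keeping the rules declarative is the design point: new detections are added as data, not as new engine code, which is presumably what makes a game-oriented rule engine a good fit.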


Reverse Engineered Antivirus Detects Classified Documents

#artificialintelligence

A recent, most-excellent post over at the Objective-See blog (seriously, go and read it) details how the author, Patrick Wardle, dissects and manipulates the antivirus (AV) signature mechanism in the macOS version of a traditional, signature-based antivirus suite to achieve arbitrary false-positive detections. The flavoring of his post, of course, is the ongoing fracas over the product's alleged potential for misbehavior in identifying and exfiltrating sensitive government documents on a computer it protects – a claim the suite's developers vehemently deny. Wardle elects not to comment on that – as do I – choosing instead to ask and answer the question: can an AV product be induced to (1) arbitrarily and incorrectly identify a file as desired by an adversary, and, if so, (2) exfiltrate the files so identified? As detailed in the blog, Wardle reverse engineered the AV product's scanning engine, which enabled him – and presumably any other sufficiently skilled attacker – to modify (he writes 'extend') how the product identifies malicious files when scanning. Once he understood the engine, Wardle used a technique for writing bytes into remote processes to patch what the engine looks for. That is to say, Wardle's success is possible precisely because the product relies on AV signatures.
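The false-positive trick is easy to see with a toy scanner. This Python sketch is a deliberate simplification, not the real product's engine (actual AV signature formats are far richer): a file is flagged when it contains any byte pattern in the signature set, so an attacker who can patch that set in memory can make the scanner flag any file they choose.

```python
# Toy signature-based scanner (an illustrative simplification, not the
# real product's engine): a file is "malicious" when any signature
# pattern appears in its bytes. If an attacker can overwrite those
# patterns in the running engine, as Wardle did, they can make the
# scanner flag arbitrary files.

def scan(data: bytes, signatures: set) -> bool:
    """Return True if any signature appears in the file's bytes."""
    return any(sig in data for sig in signatures)

signatures = {b"\x4d\x5a\x90\x00EVIL"}          # legitimate malware pattern
document = b"Quarterly report: SECRET//NOFORN"  # benign (but sensitive) file

before = scan(document, signatures)             # not flagged

# The attacker 'extends' the signature set so the sensitive document now
# matches; the AV's normal quarantine/upload path then exfiltrates it.
signatures.add(b"SECRET//NOFORN")
after = scan(document, signatures)              # flagged
```

The exfiltration step rides on functionality the product already has: many AV suites upload "detected" samples to the vendor's cloud for analysis, so a forged detection becomes a forged upload.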


AI has no place in the NHS if patient privacy isn't assured

#artificialintelligence

Tech companies are asking to step into doctors' offices with us and eavesdrop on all the symptoms and concerns we share with our GPs. While doctors and other medical staff are bound by confidentiality and ethics, we haven't yet figured out what it means when a digital third party -- the apps and algorithms -- is allowed in the room, too. Healthcare isn't the place to mimic Facebook's former motto to "move fast and break things", or to push regulations to see where they bend, a la Uber. Instead, patients need to trust who's in the consultation room with them, says Nathan Lea, senior research associate at UCL's Institute of Health Informatics and the Farr Institute of Health Informatics Research. "You want the individual to be able to share with the doctor or clinical team as much detail as necessary without the anxiety that someone else will be looking at it," he says.