The hottest party in generative AI is productivity apps

#artificialintelligence

As the search AI chatbot shindigs -- like Microsoft's Bing bot debut and Google's Bard launch -- wind down for now, who knew the hottest, trendiest party in generative AI would be … business productivity apps? After years of being relegated to nerdy, wallflower AI status while self-driving cars, robot dogs and the AI-powered metaverse got the spotlight, generative AI's email-writing, blog-producing, copy-powering abilities are suddenly popular. And companies from startups to Big Tech are developing tools to gain admittance to the generative AI bash. Arriving fashionably late to this generative AI soiree is San Francisco-based Grammarly. The digital writing assistant with a browser extension is far from a newbie to the AI space, but today the company announced its GPT-powered, chatbot-style GrammarlyGo. The new offering will start rolling out in beta in early April to its 30 million daily customers, as well as to 50,000 teams on Grammarly Business.


Criminals Use Deepfake Videos to Interview for Remote Work

#artificialintelligence

Security experts are on the alert for the next evolution of social engineering in business settings: deepfake employment interviews. The latest trend offers a glimpse into the future arsenal of criminals who use convincing, faked personae against business users to steal data and commit fraud. The concern comes following a new advisory this week from the FBI Internet Crime Complaint Center (IC3), which warned of increased activity from fraudsters trying to game the online interview process for remote-work positions. The advisory said that criminals are using a combination of deepfake videos and stolen personal data to misrepresent themselves and gain employment in a range of work-from-home positions that include information technology, computer programming, database maintenance, and software-related job functions. Federal law-enforcement officials said in the advisory that they've received a rash of complaints from businesses.


Protecting computer vision from adversarial attacks

#artificialintelligence

Advances in computer vision and machine learning have made it possible for a wide range of technologies to perform sophisticated tasks with little or no human supervision. From autonomous drones and self-driving cars to medical imaging and product manufacturing, many computer applications and robots use visual information to make critical decisions. Cities increasingly rely on these automated technologies for public safety and infrastructure maintenance. However, compared to humans, computers see with a kind of tunnel vision that leaves them vulnerable to attacks with potentially catastrophic results. For example, a human driver, seeing graffiti covering a stop sign, will still recognize it and stop the car at an intersection.


Thwarting adversarial AI with context awareness -- GCN

#artificialintelligence

Researchers at the University of California at Riverside are working to teach computer vision systems what objects typically exist in close proximity to one another, so that if one is altered, the system can flag it, potentially thwarting malicious interference with artificial intelligence systems. The yearlong project, supported by a nearly $1 million grant from the Defense Advanced Research Projects Agency, aims to understand how hackers target machine-vision systems with adversarial AI attacks. Led by Amit Roy-Chowdhury, an electrical and computer engineering professor at the school's Marlan and Rosemary Bourns College of Engineering, the project is part of the Machine Vision Disruption program within DARPA's AI Explorations program. Adversarial AI attacks -- which attempt to fool machine learning models by supplying deceptive input -- are gaining attention. "Adversarial attacks can destabilize AI technologies, rendering them less safe, predictable, or reliable," Carnegie Mellon University Professor David Danks wrote in IEEE Spectrum in February.
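To make the idea of "deceptive input" concrete, here is a minimal toy sketch (not from the article, and far simpler than attacks on real vision models) showing how a small, deliberately chosen perturbation can flip the decision of a linear classifier. All names and numbers below are illustrative assumptions.

```python
# Toy adversarial-perturbation sketch against a hypothetical linear
# classifier: score(x) = w . x + b, predicted class = 1 if score > 0 else 0.

def score(w, x, b):
    # Dot product plus bias.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, x, b):
    return 1 if score(w, x, b) > 0 else 0

w = [0.5, -0.3, 0.8]   # fixed model weights (illustrative)
b = 0.1
x = [1.0, 1.0, 1.0]    # clean input: score = 0.5 - 0.3 + 0.8 + 0.1 = 1.1 -> class 1

# FGSM-style step: nudge each feature a small amount in the direction
# that pushes the score down, i.e. against the sign of its weight.
eps = 0.8
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
# x_adv = [0.2, 1.8, 0.2]; score = 0.1 - 0.54 + 0.16 + 0.1 = -0.18 -> class 0

print(classify(w, x, b), classify(w, x_adv, b))  # prints: 1 0
```

The perturbation is small per feature, yet the predicted class flips; attacks on image classifiers exploit the same principle in a much higher-dimensional input space, which is what makes context cues like co-occurring objects a plausible defense.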