e43739bba7cdb577e9e3e4e42447f5a5-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their time and valuable feedback. Below, we clarify a number of important points raised by the reviewers. Reviewers raised concerns about multi-modal embeddings; we will highlight this limitation in Sec. R3 suggests that "the authors can adapt the FOL queries to other We argue the differences in tasks and setups below.


Language models that can search the web hold promise -- but also raise concerns

#artificialintelligence

Did you miss a session at the Data Summit? Language models -- AI systems that can be prompted to write essays and emails, answer questions, and more -- remain flawed in many ways. Because they "learn" to write from examples on the web, including problematic social media posts, they're prone to generating misinformation, conspiracy theories, and racist, sexist, or otherwise toxic language. Another major limitation of many of today's language models is that they're "stuck in time." Because they're trained once on a large collection of text from the web, the knowledge of the world they gain from that collection can quickly become outdated, depending on when they were deployed.


Video games on Tesla while driving raise concerns - WBBJ TV

#artificialintelligence

Last August, Vince Patton was watching a YouTube video of a Tesla owner who had made a startling observation: Tesla drivers could now play a video game on their car's touch-screen dashboard – while the vehicle is moving. Curious to see for himself, Patton drove his own 2021 Tesla Model 3 to an empty community college parking lot, activated a game called "Sky Force Reloaded" from a menu and did a few loops. "And I was dumbfounded by that. That just seems so inherently dangerous," said Patton, a 59-year-old retired broadcast journalist who lives near Portland, Oregon. He tried Solitaire, too, and was able to activate that game while driving.


Facial Recognition Software Results in Few Arrests, Raises Concerns

#artificialintelligence

At least 42 law enforcement agencies in Minnesota reportedly used Clearview AI facial recognition software, according to a BuzzFeed investigation. Questions about the software's reliability and legal standing remain in limbo, according to law enforcement, artificial intelligence, and privacy experts. Clearview AI is a web-based platform that allows users to submit pictures for possible matches in a database of more than 3 billion images pulled from open source websites, including news sites and social media, according to the company's web page. The company also boasted of a 100% accuracy rate at one point, according to a document obtained by a public records request from BuzzFeed.