Robot 'chef' can whip up recipes from watching videos of humans cooking food

USATODAY - Tech Top Stories

University of Cambridge engineering researchers created a robot "chef" that can create recipes from watching and analyzing videos of food preparation.


AI Chatbots Are Causing Bank Customers Headaches - CNET

CNET - News

The Consumer Financial Protection Bureau issued a warning on Tuesday about generative AI chatbots being used by banks. The agency says it's received "numerous" complaints from customers who have interacted with the chatbots and failed to receive "timely, straightforward" answers to their questions. "Working with customers to resolve a problem or answer a question is an essential function for financial institutions – and is the basis of relationship banking," the agency said in its press release. AI chatbots could run the risk of providing inaccurate financial information to customers or infringing on their privacy and data, the CFPB said.


Robot 'chef' learns to recreate recipes from watching food videos

ScienceDaily > Robotics Research

The researchers, from the University of Cambridge, programmed their robotic chef with a 'cookbook' of eight simple salad recipes. After watching a video of a human demonstrating one of the recipes, the robot was able to identify which recipe was being prepared and make it. In addition, the videos helped the robot incrementally add to its cookbook. At the end of the experiment, the robot came up with a ninth recipe on its own. Their results, reported in the journal IEEE Access, demonstrate how video content can be a valuable and rich source of data for automated food production, and could enable easier and cheaper deployment of robot chefs.
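To make the idea concrete, here is a minimal sketch of the recipe-recognition loop the article describes: compare the ingredients observed in a demonstration video against a small cookbook of known recipes, and add a new entry when nothing matches well enough. The recipe names, detections, and the 0.8 similarity threshold are illustrative assumptions, not details from the Cambridge paper.

```python
# Hypothetical sketch: match video-detected ingredients against a cookbook.
COOKBOOK = {
    "greek_salad": {"tomato", "cucumber", "feta", "olive"},
    "garden_salad": {"lettuce", "tomato", "cucumber", "carrot"},
    "fruit_salad": {"apple", "banana", "orange"},
}


def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two ingredient sets (1.0 means identical)."""
    return len(a & b) / len(a | b)


def identify_or_learn(detected: set[str], threshold: float = 0.8) -> str:
    """Return the best-matching recipe, or add a new one if none match closely."""
    best_name, best_score = max(
        ((name, jaccard(detected, ingredients)) for name, ingredients in COOKBOOK.items()),
        key=lambda pair: pair[1],
    )
    if best_score >= threshold:
        return best_name
    new_name = f"recipe_{len(COOKBOOK) + 1}"
    COOKBOOK[new_name] = set(detected)  # incrementally grow the cookbook
    return new_name


if __name__ == "__main__":
    # Ingredients "seen" in a demonstration video (illustrative only).
    print(identify_or_learn({"tomato", "cucumber", "feta", "olive"}))  # -> greek_salad
    print(identify_or_learn({"kale", "quinoa", "avocado"}))            # -> recipe_4 (learned)
```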


Robot farmers? Machines are crawling through America's fields. And some have lasers.

USATODAY - News Top Stories

It uses three high-resolution cameras to peer down at the ground below. Lit by synchronized strobe lights, an onboard computer creates a digital image of each seedling as it glides by, comparing each with all the greenery it might reasonably find in a field of rich Salinas Valley farmland two hours south of San Francisco. "It puts a dot on the stem and maps around it," says Todd Rinkenberger of FarmWise, the robot's maker. "Now it knows what's plant. Everything else is a weed."
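The quoted "dot on the stem" logic can be illustrated with a simple geometric rule: anything detected outside a small protected radius around a mapped crop stem is treated as a weed. The coordinates, the 3 cm radius, and the detections below are illustrative assumptions, not FarmWise's actual method.

```python
# Hypothetical sketch: classify detections as crop or weed by distance to mapped stems.
import math

CROP_STEMS = [(0.10, 0.25), (0.10, 0.55), (0.10, 0.85)]  # mapped crop stem positions (m)
PROTECTED_RADIUS_M = 0.03                                # keep-out zone around each stem


def is_weed(detection: tuple[float, float]) -> bool:
    """A detection is a weed unless it lies within the protected radius of some crop stem."""
    return all(math.dist(detection, stem) > PROTECTED_RADIUS_M for stem in CROP_STEMS)


if __name__ == "__main__":
    detections = [(0.11, 0.26), (0.20, 0.40), (0.09, 0.84)]
    for point in detections:
        label = "weed -> remove" if is_weed(point) else "crop -> protect"
        print(point, label)
```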


AI Doomerism Is a Decoy

The Atlantic - Technology

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their products' harms--even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they're skeptical of the rhetoric, and that Big Tech's proposed regulations appear defanged and self-serving.


The existential threat from AI – and from humans misusing it | Letters

The Guardian

Regarding Jonathan Freedland's article about AI (The future of AI is chilling – humans have to act together to overcome this threat to civilisation, 26 May), isn't worrying about whether an AI is "sentient" rather like worrying whether a prosthetic limb is "alive"? There isn't even any evidence that "sentience" is a thing. More likely, like life, it is a bunch of distinct capabilities interacting, and "AI" (ie disembodied artificial intellect) is unlikely to reproduce more than a couple of those capabilities. That's because it is an attempt to reproduce the function of just a small part of the human brain: more particularly, of the evolutionarily new part. Our motivation to pursue self-interest comes from a billion years of evolution of the old brain, which AI is not based upon.


What the Amazon Alexa settlement means for parents and kids

Washington Post - Technology News

Once you have an Alexa-enabled device like an Amazon Echo, open the Alexa app on your smartphone or tablet and go to Settings > Alexa Privacy > Manage Your Alexa Data > Choose how long to save recordings. Select "Don't save recordings" and hit confirm. Delete past recordings in the Alexa Privacy section, including your voice history and history of detected sounds.


Does artificial intelligence pose the risk of human extinction?

Al Jazeera

Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.


Driverless trucks on California highways? Legislators don't trust the DMV to ensure safety

Los Angeles Times

When Teslas are in self-driving mode, they've been recorded crossing into oncoming traffic and hitting parked cars. But what would happen if an 80,000-pound, 18-wheel driverless truck suddenly went off the rails? That's an experiment some California legislators aren't ready to run. They argue that the state Department of Motor Vehicles has so badly mishandled the driverless car industry that it can't be trusted to oversee big rigs barreling down the highways autonomously. AB 316 -- which would wrest control of driverless truck testing and deployment from the DMV and require human drivers in the cab for at least five years while a safety record is collected -- passed in the Assembly on Wednesday.