AI-Alerts


Robots with squidgy paws could navigate uneven terrain

New Scientist

Robots could negotiate awkward terrain surefootedly thanks to squidgy paws containing cameras. Tejal Barnwal at the Indian Institute of Technology Bombay, Jørgen Anker Olsen at the Norwegian University of Science and Technology, and their colleagues have developed what they call a Terrain Recognition and Contact Force Estimation Paw (TRACEPaw). The bottom of each foot is a silicone hemisphere, which deforms as the robot walks.
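The paw pairs that deformable hemisphere with an embedded camera, so terrain type and contact force can be inferred from how the silicone squashes. As a rough, physics-only illustration of the force side of that problem, the sketch below estimates normal contact force from indentation depth using the classical Hertz model for a sphere pressed against a flat surface; every parameter value is an invented placeholder, and this is not the TRACEPaw team's actual image-based pipeline.

```python
import math

def hertz_contact_force(depth_m: float,
                        radius_m: float = 0.03,
                        youngs_modulus_pa: float = 1.0e6,
                        poisson_ratio: float = 0.48) -> float:
    """Estimate the normal force on a deformable hemisphere.

    Classical Hertz contact for a sphere against a rigid flat:
        F = (4/3) * E_eff * sqrt(R) * d**1.5
    where E_eff = E / (1 - nu**2). The default radius and material
    constants are illustrative guesses, not TRACEPaw measurements.
    """
    e_eff = youngs_modulus_pa / (1.0 - poisson_ratio ** 2)
    return (4.0 / 3.0) * e_eff * math.sqrt(radius_m) * depth_m ** 1.5

# Example: a 2 mm indentation of a 3 cm silicone hemisphere.
print(f"Estimated contact force: {hertz_contact_force(0.002):.1f} N")
```

In the real system the indentation depth is not measured directly; it has to be recovered from the camera's view of the deforming silicone, which is the harder part of the problem.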


The real story of the OpenAI debacle is the tyranny of big tech | Sarah Radsch

The Guardian

The theatrics of OpenAI's seeming implosion amid the firing of its CEO and co-founder Sam Altman, Microsoft's dramatic offer to poach its top executives and staff, and Altman's triumphant return following the ouster of the board have all the trappings of a Hollywood blockbuster. But the unfolding drama should put the spotlight on the tyranny of the tech titans that control critical aspects of the AI ecosystem. OpenAI has developed some of the most advanced large language models and pioneering artificial-intelligence products, such as the text generator ChatGPT and the image generator Dall-E, which have made generative AI a household term and AI risk a dinnertime conversation. Although OpenAI is in the spotlight, Microsoft has played a leading role in the unfolding drama. It swooped in to scoop up the ousted executives and create a new AI research division for Altman to lead, with hundreds of staff reportedly ready to follow them.


Finding value in generative AI for financial services

MIT Technology Review

According to a McKinsey report, generative AI could add $2.6 trillion to $4.4 trillion in value to the global economy annually. The banking industry was highlighted as one of the sectors that could see the biggest impact, as a percentage of revenue, from generative AI. The technology "could deliver value equal to an additional $200 billion to $340 billion annually if the use cases were fully implemented," says the report. For businesses in every sector, the current challenge is to separate the hype that accompanies any new technology from the real and lasting value it may bring. This is a pressing issue for firms in financial services.


AI cleaning robot can tidy up clothes in a messy bedroom

New Scientist

A robot that can pick up several bits of clothing at once from a pile strewn across the floor could be used to help tidy messy bedrooms. Picking up piles of clothes and grasping multiple items simultaneously may be straightforward for a human, but actions such as working out where the clothes' edges are and how to group them together pose problems for a robot.
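Part of what makes this hard is turning a cluttered scene into discrete, graspable groups. As a hypothetical, much-simplified sketch of that grouping step, the code below takes a binary mask of cloth pixels (which in practice would come from a learned segmentation model, not shown here) and clusters connected regions into ranked candidate pick targets; it is an illustration of the problem, not the researchers' pipeline.

```python
import numpy as np
from scipy import ndimage

def group_garment_regions(mask: np.ndarray, min_pixels: int = 50):
    """Cluster a binary garment mask into candidate pick regions.

    `mask` is a 2-D boolean array where True marks cloth pixels.
    Returns (row_centroid, col_centroid, area) tuples, largest first.
    """
    labels, n_regions = ndimage.label(mask)  # connected components
    regions = []
    for i in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size < min_pixels:             # skip specks of noise
            continue
        regions.append((ys.mean(), xs.mean(), ys.size))
    # Bigger regions are more likely to contain several overlapping
    # garments, so a planner might try multi-item grasps there first.
    return sorted(regions, key=lambda r: -r[2])

# Toy scene: two piles of "clothing" on an otherwise empty floor.
floor = np.zeros((100, 100), dtype=bool)
floor[10:30, 10:40] = True
floor[60:95, 50:90] = True
print(group_garment_regions(floor))
```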


Squishy inflatable tubes could make programmable soft robots

New Scientist

Inflatable squishy tubes could be used to build soft robots that move when air is pushed through them. Robotic hands made from metal frequently end up crushing delicate objects such as fruit when trying to pick them up, so researchers often experiment with making them out of gentler materials. Pierre-Thomas Brun at Princeton University and his colleagues have found that soft, inflatable tubes may do the trick. The team filled various moulds with a rubber-like material called polyvinyl siloxane, which starts off liquid but cures into an elastic solid.


What the OpenAI drama means for AI progress -- and safety

Nature

OpenAI -- the company behind the blockbuster artificial intelligence (AI) bot ChatGPT -- has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board. The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely. "The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.


Sam Altman's Second Coming Sparks New Fears of the AI Apocalypse

WIRED

OpenAI's new boss is the same as the old boss. But the company--and the artificial intelligence industry--may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI's CEO, cofounder and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup's staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI's operations and bound to a "for the good of humanity" mission statement, was critical to the company's uniqueness.


Judge finds 'reasonable evidence' Tesla knew self-driving tech was defective

The Guardian

A judge has found "reasonable evidence" that Elon Musk and other executives at Tesla knew that the company's self-driving technology was defective but nonetheless allowed the cars to be driven in an unsafe manner, according to a recent ruling issued in Florida. Palm Beach county circuit court judge Reid Scott said he had found evidence that Tesla "engaged in a marketing strategy that painted the products as autonomous" and that Musk's public statements about the technology "had a significant effect on the belief about the capabilities of the products". The ruling, reported by Reuters on Wednesday, clears the way for a lawsuit over a fatal crash in 2019 north of Miami involving a Tesla Model 3. The vehicle crashed into an 18-wheeler truck that had turned onto the road into the path of driver Stephen Banner, shearing off the Tesla's roof and killing Banner. The lawsuit, brought by Banner's wife, accuses the company of intentional misconduct and gross negligence, which could expose Tesla to punitive damages. The ruling comes after Tesla won two product liability lawsuits in California earlier this year focused on alleged defects in its Autopilot system.


E.U.'s AI Regulation Could Be Softened After Pushback From Biggest Members

TIME - Tech

A key aspect of the E.U.'s landmark AI Act could be watered down after the French, German, and Italian governments advocated for limited regulation of the powerful models--known as foundation models--that underpin a wide range of artificial intelligence applications. A document seen by TIME, shared with officials from the European Parliament and the European Commission by the bloc's three biggest economies over the weekend, proposes that AI companies working on foundation models regulate themselves by publishing certain information about their models and signing up to codes of conduct. There would initially be no punishment for companies that did not follow these rules, though penalties might be introduced if companies repeatedly violated the codes of conduct. Foundation models are some of the most powerful, valuable, and potentially risky AI systems in existence. Many of the most prominent and hyped AI companies--including OpenAI, Google DeepMind, Anthropic, xAI, Cohere, InflectionAI, and Meta--develop foundation models.


ChatGPT generates fake data set to support scientific hypothesis

Nature

Researchers have used the technology behind the artificial intelligence (AI) chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim. In a paper published in JAMA Ophthalmology on 9 November, the authors used GPT-4 -- the latest version of the large language model on which ChatGPT runs -- paired with Advanced Data Analysis (ADA), a model that incorporates the programming language Python and can perform statistical analysis and create data visualizations. The AI-generated data compared the outcomes of two surgical procedures and indicated -- wrongly -- that one treatment is better than the other. "Our aim was to highlight that, in a few minutes, you can create a data set that is not supported by real original data, and it is also opposite or in the other direction compared to the evidence that are available," says study co-author Giuseppe Giannaccare, an eye surgeon at the University of Cagliari in Italy. The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about research integrity.
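The study's point is how little effort such fabrication takes. The hedged toy sketch below shows the kind of output an ADA-style Python session can produce: a superficially plausible two-arm surgical-trial table whose gap between treatments is engineered in advance. Every number is invented for illustration, and this is not the JAMA Ophthalmology authors' code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
n = 160  # 80 invented "patients" per surgical arm

# Outcome scores are drawn with a deliberate gap between arms, so the
# fabricated data appear to favour procedure A over procedure B.
trial = pd.DataFrame({
    "patient_id": np.arange(n),
    "procedure": np.repeat(["A", "B"], n // 2),
    "age": rng.integers(40, 85, size=n),
    "outcome_score": np.concatenate([
        rng.normal(78, 6, n // 2),  # arm A: engineered to look better
        rng.normal(71, 6, n // 2),  # arm B
    ]).round(1),
})
print(trial.groupby("procedure")["outcome_score"].describe())
```

A table like this can pass a casual glance, which is precisely the research-integrity worry the authors are flagging.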