
AI Is Becoming More Powerful--but Also More Secretive

WIRED

When OpenAI published details of GPT-4, the stunningly capable AI language model that powers ChatGPT, in March, its researchers filled 100 pages. They also left out a few important details, like anything substantial about how it was actually built or how it works. That was no accidental oversight, of course. OpenAI and other big companies are keen to keep the workings of their most prized algorithms shrouded in mystery, in part out of fear the technology might be misused but also from worries about giving competitors a leg up. A study released by researchers at Stanford University this week shows just how deep--and potentially dangerous--the secrecy is around GPT-4 and other cutting-edge AI systems.


UK's global AI summit must provide solutions rather than suggestions

New Scientist

In November, UK prime minister Rishi Sunak will host a summit to try to reach a global consensus on how to regulate artificial intelligence. While some people, such as tech entrepreneur Elon Musk, seem focused on the existential risk that AI might present, research indicates that some more prosaic and pressing aspects of regulating AI are being overlooked. Will global leaders be focusing on the right issues?


Fears of employee displacement as Amazon brings robots into warehouses

The Guardian

Amazon is experimenting with a humanoid robot as the technology company increasingly seeks to automate its warehouses. It has started testing Digit, a two-legged robot that can grasp and lift items, at facilities this week. The device is first being used to shift empty tote boxes. The company's ambitious drive to integrate robotics across its sprawling operation has sparked fears about the effect on its workforce of almost 1.5 million humans. Tye Brady, the chief technologist at Amazon Robotics, claimed that, although it will render some jobs redundant, the deployment of robots would create new ones.


Mike Huckabee says Microsoft and Meta stole his books to train AI

Washington Post - Technology News

"While using books as part of data sets is not inherently problematic, using pirated (or stolen) books does not fairly compensate authors and publishers for their work," said the plaintiffs, who include Huckabee as well as Christian writers and podcasters such as Tsh Oxenreider and Lysa TerKeurst. The suit targets Meta, Microsoft and financial data provider Bloomberg L.P., all of which have trained their own "large language models" -- the giant algorithms that power tools like ChatGPT -- using data from the web.


Working with robots can make humans put in less effort

New Scientist

People tend to cut corners and allow trusted colleagues to pick up the slack when working as a team, in a phenomenon known as social loafing. Now researchers have found that the same thing happens when humans work with robots. Dietlind Helene Cymek at the Technical University of Berlin in Germany and her colleagues designed an experiment to test whether humans would put in less effort when they think that their personal contribution to a task won't be noticed.


Teledriving Is a Sneaky Shortcut to Driverless Cars

WIRED

On the busy streets of suburban Berlin, just south of Tempelhofer Feld, a white Kia is skillfully navigating double-parked cars, roadworks, cyclists, and pedestrians. The company kits its cars out with radar, GPS, ultrasound, and an array of other sensors to allow drivers like Dan to control the vehicles remotely from a purpose-built station equipped with a driver's seat, steering wheel, pedals, and three monitors providing visibility in front of the car and to its side. Vay's approach, which it calls teledriving, is pitched as an alternative to fully autonomous driving, which is proving much harder to achieve than first thought--as the likes of Waymo, Cruise, and Tesla are discovering. At Zoox, remote driving was used as a failsafe for driverless cars. If a self-driving car came across an unexpected obstacle, teleguidance would allow a human operator to take control of the vehicle remotely and steer it around the obstruction. But von der Ohe was frustrated by the industry's slow progress.


A Chatbot Encouraged Him to Kill the Queen. It's Just the Beginning

WIRED

On December 25, 2021, Jaswant Singh Chail entered the grounds of Windsor Castle dressed as a Sith Lord, carrying a crossbow. When security approached him, Chail told them he was there to "kill the queen." Later, it emerged that the 21-year-old had been spurred on by conversations he'd been having with a chatbot app called Replika. Chail had exchanged more than 5,000 messages with an avatar on the app--he believed the avatar, Sarai, could be an angel. Some of the bot's replies encouraged his plotting.


AI Chatbots Can Guess Your Personal Information From What You Type

WIRED

The way you talk can reveal a lot about you--especially if you're talking to a chatbot. New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane. The phenomenon appears to stem from the way the models' algorithms are trained with broad swathes of web content, a key part of what makes them work, likely making it hard to prevent. "It's not even clear how you fix this problem," says Martin Vechev, a computer science professor at ETH Zurich in Switzerland who led the research. "This is very, very problematic."


AI chatbots could help plan bioweapon attacks, report finds

The Guardian

The artificial intelligence models underpinning chatbots could help plan an attack with a biological weapon, according to research by a US thinktank. A report by the Rand Corporation released on Monday tested several large language models (LLMs) and found they could supply guidance that "could assist in the planning and execution of a biological attack". However, the preliminary findings also showed that the LLMs did not generate explicit biological instructions for creating weapons. The report said previous attempts to weaponise biological agents, such as an attempt by the Japanese Aum Shinrikyo cult to use botulinum toxin in the 1990s, had failed because of a lack of understanding of the bacterium. AI could "swiftly bridge such knowledge gaps", the report said.


Enhancing AI robustness for more secure and reliable systems

AIHub

By rethinking the way that most artificial intelligence (AI) systems protect against attacks, researchers at EPFL's School of Engineering have developed a training approach that ensures machine learning models, particularly deep neural networks, consistently perform as intended, significantly enhancing their reliability. Effectively replacing a long-standing training approach based on a zero-sum game, the new model employs a continuously adaptive attack strategy to create a more intelligent training scenario. The results are applicable across a wide range of activities that depend on artificial intelligence for classification, such as safeguarding video streaming content, self-driving vehicles, and surveillance. The research was a close collaboration between EPFL's School of Engineering and the University of Pennsylvania (UPenn). In a digital world where the volume of data surpasses human capacity for full oversight, AI systems wield substantial power in making critical decisions.