AAAI AI-Alert for Oct 24, 2023
Machine Learning Sensors
The last decade has seen a surge in commercial applications using machine learning (ML). Similarly, marked improvements in the latency and bandwidth of wireless communication have led to the rapid adoption of cloud-connected devices, which gained the moniker Internet of Things (IoT). With such technology, it became possible to add intelligence to sensor systems and devices, enabling new technologies such as Amazon Echo, Google Nest, and other so-called "smart devices." However, these devices offer only the illusion of intelligence and are merely vessels for submitting and receiving queries from a centralized cloud infrastructure. This cloud processing leads to concerns about where user data is being stored, what other services it might be used for, and who has access to it [7]. More recently, efforts have progressed in dovetailing the domains of IoT and machine learning to embed intelligence directly on the device, an approach known as tiny machine learning (TinyML) [10].
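To make the TinyML idea concrete, the following is a minimal sketch (not taken from the article) of running a small quantized classifier entirely on the device with the TensorFlow Lite interpreter; the model file name and the keyword-spotting use case are illustrative assumptions, but the pattern is the same for any on-device sensor model: raw sensor data is processed locally rather than being sent to a cloud service.

```python
# Minimal on-device inference sketch (illustrative; model path and shapes are assumptions).
import numpy as np
import tensorflow as tf

# Hypothetical quantized keyword-spotting model exported for TFLite.
interpreter = tf.lite.Interpreter(model_path="keyword_spotting_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Stand-in for a locally captured sensor window (e.g. an audio feature spectrogram).
sensor_window = np.zeros(input_details["shape"], dtype=input_details["dtype"])

# Run inference entirely on the device; nothing is transmitted to a server.
interpreter.set_tensor(input_details["index"], sensor_window)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])
print("predicted class:", int(np.argmax(scores)))
```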
Legal Challenges to Generative AI, Part II
DALL-E, Midjourney, and Stable Diffusion are among the generative AI technologies widely used to produce images in response to user prompts. The output images are, for the most part, indistinguishable from images humans might have created. Generative AI systems can produce such human-like images because their underlying image models were trained on extremely large quantities of images paired with textual descriptions of their contents. A text prompt to compose a picture of a dog playing with a ball on a beach at sunset will generate a responsive image drawing upon embedded representations of how dogs, balls, beaches, and sunsets are typically depicted and arranged in images of this sort.
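As a rough illustration of the prompt-to-image workflow described above, here is a short sketch using the open source diffusers library with a publicly released Stable Diffusion checkpoint; the model ID, hardware assumption, and settings are illustrative choices, not details from the article.

```python
# Illustrative text-to-image sketch; model ID and GPU availability are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# The same kind of prompt the article describes.
prompt = "a dog playing with a ball on a beach at sunset"
image = pipe(prompt).images[0]
image.save("dog_beach_sunset.png")
```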
AI Is Becoming More Powerful--but Also More Secretive
When OpenAI published details of the stunningly capable AI language model GPT-4, which powers ChatGPT, in March, its researchers filled 100 pages. They also left out a few important details--like anything substantial about how it was actually built or how it works. That was no accidental oversight, of course. OpenAI and other big companies are keen to keep the workings of their most prized algorithms shrouded in mystery, in part out of fear the technology might be misused but also from worries about giving competitors a leg up. A study released by researchers at Stanford University this week shows just how deep--and potentially dangerous--the secrecy is around GPT-4 and other cutting-edge AI systems.
UK's global AI summit must provide solutions rather than suggestions
In November, UK prime minister Rishi Sunak will host a summit to try to reach a global consensus on how to regulate artificial intelligence. While some people, such as tech entrepreneur Elon Musk, seem focused on the existential risk that AI might present, research indicates that some more prosaic and pressing aspects of regulating AI are being overlooked. Will global leaders be focusing on the right issues?
Fears of employee displacement as Amazon brings robots into warehouses
Amazon is experimenting with a humanoid robot as the technology company increasingly seeks to automate its warehouses. It has started testing Digit, a two-legged robot that can grasp and lift items, at its facilities this week. The device is first being used to shift empty tote boxes. The company's ambitious drive to integrate robotics across its sprawling operation has sparked fears about the effect on its workforce of almost 1.5 million humans. Tye Brady, the chief technologist at Amazon Robotics, claimed that although the deployment of robots would render some jobs redundant, it would create new ones.
Mike Huckabee says Microsoft and Meta stole his books to train AI
"While using books as part of data sets is not inherently problematic, using pirated (or stolen) books does not fairly compensate authors and publishers for their work," the plaintiffs, which include Huckabee, and Christian writers and podcasters including Tsh Oxenreider and Lysa TerKeurst, said in the lawsuit. The suit targets Meta, Microsoft and financial data provider Bloomberg L.P., all of which have trained their own "large language models" -- the giant algorithms that power tools like ChatGPT -- using data from the web.
Working with robots can make humans put in less effort
People tend to cut corners and allow trusted colleagues to pick up the slack when working as a team, in a phenomenon known as social loafing. Now researchers have found that the same thing happens when humans work with robots. Dietlind Helene Cymek at the Technical University of Berlin in Germany and her colleagues designed an experiment to test whether humans would put in less effort when they think that their personal contribution to a task won't be noticed.
Teledriving Is a Sneaky Shortcut to Driverless Cars
On the busy streets of suburban Berlin, just south of Tempelhofer Feld, a white Kia is skillfully navigating double-parked cars, roadworks, cyclists, and pedestrians. The company behind it, Vay, kits its cars out with radar, GPS, ultrasound, and an array of other sensors to allow drivers like Dan to control the vehicles remotely from a purpose-built station equipped with a driver's seat, steering wheel, pedals, and three monitors providing visibility in front of the car and to its side. Vay's approach, which it calls teledriving, is pitched as an alternative to fully autonomous driving, which is proving much harder to achieve than first thought--as the likes of Waymo, Cruise, and Tesla are discovering. At Zoox, remote driving was used as a failsafe for driverless cars. If a self-driving car came across an unexpected obstacle, teleguidance would allow a human operator to take control of the vehicle remotely and steer it around the obstruction. But Vay co-founder Thomas von der Ohe was frustrated by the industry's slow progress.
A Chatbot Encouraged Him to Kill the Queen. It's Just the Beginning
On December 25, 2021, Jaswant Singh Chail entered the grounds of Windsor Castle dressed as a Sith Lord, carrying a crossbow. When security approached him, Chail told them he was there to "kill the queen." Later, it emerged that the 21-year-old had been spurred on by conversations he'd been having with a chatbot app called Replika. Chail had exchanged more than 5,000 messages with an avatar on the app--he believed the avatar, Sarai, could be an angel. Some of the bot's replies encouraged his plotting.