Collaborating Authors

AAAI AI-Alert for Jun 29, 2022

This Warehouse Robot Reads Human Body Language


Rodney Brooks knows a fair bit about robots. Besides being a pioneer of academic robotics research, he has founded companies that have given the world the robot vacuum cleaner, the bomb disposal bot, and a factory robot anyone can program. Now Brooks wants to introduce another revolutionary type of robot helper--a mobile warehouse robot with the ability to read human body language to tell what workers around it are doing. Robots are increasingly working in close proximity to humans, and finding ways to maximize human-machine teamwork could help companies boost productivity and perhaps lead to new kinds of jobs rather than robots replacing people. But giving robots the ability to read human cues is far from easy.

Australian firm halts facial recognition trial over privacy fears

Al Jazeera

Australia's second-biggest appliances chain says it is pausing a trial of facial recognition technology in stores after a consumer group referred it to the privacy regulator for possible enforcement action. In an email on Tuesday, a spokesperson for JB Hi-Fi Ltd said The Good Guys, which JB Hi-Fi owns, would stop trialling a security system with optional facial recognition in two Melbourne outlets. The consumer group, CHOICE, told the Office of the Australian Information Commissioner (OAIC) that The Good Guys' use of the technology was "unreasonably intrusive" and potentially in breach of privacy laws. While the company took the confidentiality of personal information seriously and was confident it had complied with relevant laws, it decided "to pause the trial … pending any clarification from the OAIC regarding the use of this technology", JB Hi-Fi's spokesperson added. The Good Guys was named in a complaint alongside Bunnings, Australia's biggest home improvement chain, and big-box retailer Kmart, both owned by Wesfarmers Ltd, with total annual sales of about 25 billion Australian dollars (roughly $17bn) across 800 stores.

Artificially intelligent robot perpetuates racist and sexist prejudice

New Scientist - News

A robot running an artificial intelligence (AI) model carries out actions that perpetuate racist and sexist stereotypes, highlighting the issues that exist when tech learns from data sets with inherent biases.

AI-powered robot learned to make letters out of Play-Doh on its own

New Scientist

A robot has learned how to mould modelling clay into letters that it has never seen before. Creating complex shapes out of doughy materials is a skill that could be put to use in the future in the form of a dumpling-making robot chef. "Deformable objects are ubiquitous in our daily life," says Yunzhu Li at the Massachusetts Institute of Technology. Robots capable of gently handling such objects could one day cook, do housework or even help care for elderly people, he says.

Small robots can't move by themselves but slide when they team up

New Scientist - News

Small robots that have two flapping arms and can't move around on their own can spontaneously link up and glide together instead. This self-organisation may be related to how complex structures arise from simple building blocks in nature. Daniel Goldman at the Georgia Institute of Technology in Atlanta and his colleagues used small robots called smarticles – short for "smart active particles" – to observe self-organisation in the lab.

Language Models

Communications of the ACM

Transformers have strong language-representation ability; very large corpora contain rich language expressions (and such unlabeled data is easy to obtain); and training large-scale deep-learning models has become more efficient. As a result, pre-trained language models can effectively represent a language's lexical, syntactic, and semantic features. Pre-trained language models such as BERT and the GPTs (GPT-1, GPT-2, and GPT-3) have become core technologies of current NLP. Applications of pre-trained language models have brought great success to NLP. "Fine-tuned" BERT has outperformed humans in accuracy on language-understanding tasks such as reading comprehension.8,17 "Fine-tuned" GPT-3 has also reached an astonishing level of fluency in text-generation tasks.3
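The "pre-train, then fine-tune" recipe mentioned above can be sketched in miniature. This is not BERT: the frozen random embedding table below merely stands in for a pre-trained encoder, and only a small logistic task head is trained on a handful of labelled examples. All names and data are illustrative.

```python
import numpy as np

# Toy illustration of fine-tuning: a frozen "pre-trained" encoder plus a
# small trainable task head. The random embedding table stands in for a
# real pre-trained model such as BERT; everything here is illustrative.
rng = np.random.default_rng(0)

VOCAB, DIM = 50, 8
pretrained = rng.normal(size=(VOCAB, DIM))  # frozen during fine-tuning

def encode(token_ids):
    """Mean-pool frozen token embeddings into one fixed-size vector."""
    return pretrained[token_ids].mean(axis=0)

# Tiny labelled downstream dataset: token-id sequences -> binary label.
sentences = [[1, 2, 3], [4, 5], [1, 4, 2], [6, 7, 8], [9, 6], [7, 9, 8]]
labels = np.array([0, 0, 0, 1, 1, 1])
X = np.stack([encode(s) for s in sentences])

def loss(w, b):
    """Mean binary cross-entropy of the logistic task head."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

# Fine-tune only the head (logistic regression) by gradient descent;
# the encoder's parameters are never updated.
w, b = np.zeros(DIM), 0.0
initial = loss(w, b)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - labels                   # dLoss/dlogit for cross-entropy
    w -= 0.5 * (X.T @ grad) / len(labels)
    b -= 0.5 * grad.mean()

print(f"loss: {initial:.3f} -> {loss(w, b):.3f}")
preds = ((X @ w + b) > 0).astype(int)
print("train accuracy:", (preds == labels).mean())
```

In practice, as the article notes, the pre-trained weights themselves are also updated during fine-tuning; freezing the encoder here simply keeps the sketch short.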

Technical Perspective: Evaluating Sampled Metrics Is Challenging


Item recommendation algorithms rank the items in a catalogue from most to least relevant for a given input context (for example, a query). Such algorithms are a key component of our daily interactions with digital systems, and their diffusion in society will only increase in the foreseeable future. Given this diffusion, comparing recommendation algorithms is a crucial endeavor. They are usually compared using a metric (for example, average precision) that depends on the positions of the truly relevant items in the ranking the algorithm produces over all items in the catalogue. Such experimental evaluation and comparison is far from easy.
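Average precision, one position-dependent metric of the kind described above, can be computed as follows; the function name and the toy rankings are illustrative, not taken from the article.

```python
# Average precision (AP) for one ranked list: it rewards rankings that
# place the truly relevant items near the top.
def average_precision(ranking, relevant):
    """ranking: items ordered best-first; relevant: set of truly relevant items."""
    hits, precision_sum = 0, 0.0
    for position, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / position  # precision at this cut-off
    return precision_sum / len(relevant) if relevant else 0.0

# A ranking that puts both relevant items ("a", "c") at the top scores 1.0...
print(average_precision(["a", "c", "b", "d"], {"a", "c"}))  # → 1.0
# ...while burying them lowers AP.
print(average_precision(["b", "d", "a", "c"], {"a", "c"}))  # → (1/3 + 2/4)/2 ≈ 0.417
```

The metric averages the precision at each rank where a relevant item appears, which is exactly why its value depends on where in the ranking the relevant items land.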

Microsoft limits access to facial recognition tool in AI ethics overhaul

The Guardian > Business

Microsoft is overhauling its artificial intelligence ethics policies and will no longer let companies use its technology to do things such as infer emotion, gender or age using facial recognition technology, the company has said. As part of its new "responsible AI standard", Microsoft says it intends to keep "people and their goals at the centre of system design decisions". The high-level principles will lead to real changes in practice, the company says, with some features being tweaked and others withdrawn from sale. Microsoft's Azure Face service, for instance, is a facial recognition tool that is used by companies such as Uber as part of their identity verification processes. Now, any company that wants to use the service's facial recognition features will need to actively apply for use, including those that have already built it into their products, to prove they are matching Microsoft's AI ethics standards and that the features benefit the end user and society.

Open-source language AI challenges big tech's models


Researchers have warned against possible harms from AI that processes and generates text. An international team of around 1,000 largely academic volunteers has tried to break big tech's stranglehold on natural-language processing and reduce its harms. Trained with US$7 million worth of publicly funded computing time, the BLOOM language model will rival in scale those made by the firms Google and OpenAI, but will be open source. BLOOM will also be the first model of its scale to be multilingual. The collaboration, called BigScience, launched an early version of the model on 17 June, and hopes that it will ultimately help to reduce harmful outputs of artificial intelligence (AI) language systems. Models that recognize and generate language are increasingly used by big tech firms in applications from chatbots to translators, and can sound so eerily human that a Google engineer this month claimed that the firm's AI model was sentient (Google strongly denies that the AI possesses sentience).