Rodney Brooks knows a fair bit about robots. Besides being a pioneer of academic robotics research, he has founded companies that have given the world the robot vacuum cleaner, the bomb disposal bot, and a factory robot anyone can program. Now Brooks wants to introduce another revolutionary type of robot helper: a mobile warehouse robot with the ability to read human body language to tell what workers around it are doing. Robots are increasingly working in close proximity to humans, and finding ways to maximize human-machine teamwork could help companies boost productivity and perhaps lead to new kinds of jobs rather than robots replacing people. But giving robots the ability to read human cues is far from easy.
Australia's second-biggest appliances chain says it is pausing a trial of facial recognition technology in stores after a consumer group referred it to the privacy regulator for possible enforcement action. In an email on Tuesday, a spokesperson for JB Hi-Fi Ltd said The Good Guys, which JB Hi-Fi owns, would stop trialling a security system with optional facial recognition in two Melbourne outlets. Use of the technology by The Good Guys was "unreasonably intrusive" and potentially in breach of privacy laws, consumer group CHOICE told the Office of the Australian Information Commissioner (OAIC). While the company takes the confidentiality of personal information seriously and was confident it had complied with relevant laws, it decided "to pause the trial … pending any clarification from the OAIC regarding the use of this technology", JB Hi-Fi's spokesperson added. The Good Guys was named in a complaint alongside Bunnings, Australia's biggest home improvement chain, and big box retailer Kmart, both owned by Wesfarmers Ltd, with total annual sales of about 25 billion Australian dollars ($19.47bn) across 800 stores.
A robot has learned how to mould modelling clay into letters that it has never seen before. Creating complex shapes out of doughy materials is a skill that could be put to use in the future in the form of a dumpling-making robot chef. "Deformable objects are ubiquitous in our daily life," says Yunzhu Li at the Massachusetts Institute of Technology. Robots capable of gently handling such objects could one day cook, do housework or even help care for elderly people, he says.
Small robots that have two flapping arms and can't move around on their own can spontaneously link up and glide together instead. This self-organisation may be related to how complex structures arise from simple building blocks in nature. Daniel Goldman at the Georgia Institute of Technology in Atlanta and his colleagues used small robots called smarticles – short for "smart active particles" – to observe self-organisation in the lab.
A transformer has strong language-representation ability; very large corpora contain rich language expressions (and such unlabeled data can be easily obtained); and training large-scale deep learning models has become more efficient. Therefore, pre-trained language models can effectively represent a language's lexical, syntactic, and semantic features. Pre-trained language models, such as BERT and the GPTs (GPT-1, GPT-2, and GPT-3), have become the core technologies of current NLP. Applications of pre-trained language models have brought great success to NLP. Fine-tuned BERT has outperformed humans in terms of accuracy on language-understanding tasks, such as reading comprehension.8,17 Fine-tuned GPT-3 has also reached an astonishing level of fluency on text-generation tasks.3
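The representation ability mentioned above rests on the transformer's attention mechanism. As a minimal sketch (a toy NumPy implementation with made-up dimensions, not the code of BERT or GPT), scaled dot-product attention mixes value vectors according to how well each query matches each key:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of values

# Toy example: 3 token positions, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one contextualised vector per position
```

In a full transformer this operation is applied in parallel across many heads and stacked in layers; pre-training learns the projections that produce Q, K, and V, and fine-tuning then adapts those weights to a downstream task.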
Item recommendation algorithms rank the items in a catalogue from the most relevant to the least relevant for a given context (for example, a query) provided as input. Such algorithms are a key component of our daily interactions with digital systems, and their diffusion in society will only increase in the foreseeable future. Given the diffusion of recommendation systems, their comparison is a crucial endeavor. Item recommendation algorithms are usually compared using some metric (for example, average precision) that depends on the position of the truly relevant items in the ranking the algorithm produces over all the items in the catalogue. The experimental evaluation and comparison of such algorithms is far from easy.
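To make the position-dependence concrete, here is a sketch of the average precision metric mentioned above (the function name and toy data are illustrative, not from any particular evaluation library):

```python
def average_precision(ranking, relevant):
    """Average precision of a ranked list given the set of truly relevant items.

    Precision@k is computed at each rank k where a relevant item appears,
    and those precisions are averaged over all relevant items.
    """
    hits = 0
    precision_sum = 0.0
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / k   # precision at this cut-off
    return precision_sum / len(relevant) if relevant else 0.0

# A ranking that places both relevant items at the top scores higher
# than one that buries them at the bottom of the list.
print(average_precision(["a", "b", "c", "d"], {"a", "b"}))  # 1.0
print(average_precision(["c", "d", "a", "b"], {"a", "b"}))  # ~0.417
```

Because the metric depends only on where the relevant items land, two algorithms can retrieve the same items yet receive very different scores, which is part of what makes fair experimental comparison difficult.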
Microsoft is overhauling its artificial intelligence ethics policies and will no longer let companies use its technology to do things such as infer emotion, gender or age using facial recognition technology, the company has said. As part of its new "responsible AI standard", Microsoft says it intends to keep "people and their goals at the centre of system design decisions". The high-level principles will lead to real changes in practice, the company says, with some features being tweaked and others withdrawn from sale. Microsoft's Azure Face service, for instance, is a facial recognition tool that is used by companies such as Uber as part of their identity verification processes. Now, any company that wants to use the service's facial recognition features will need to actively apply for access, including those that have already built it into their products, and show that they meet Microsoft's AI ethics standards and that the features benefit the end user and society.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure. The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines. This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.