Goto

Collaborating Authors

2022-07


Google fires software engineer who says AI chatbot LaMDA has feelings

#artificialintelligence

Google has fired a senior software engineer who says the company's artificial intelligence chatbot system has feelings. Blake Lemoine, a software engineer and AI researcher, went public last month with his claim that Google's language technology was sentient and should consequently have its "wants" respected. Google has denied Mr Lemoine's suggestion, and it has now confirmed that he has been dismissed. The tech giant said Mr Lemoine's claims about the Language Model for Dialogue Applications (LaMDA) being sentient were "wholly unfounded", and that the company had "worked to clarify that with him for many months".


Using AI to Diagnose Birth Defect in Fetal Ultrasound Images - Neuroscience News

#artificialintelligence

Summary: Using datasets of fetal ultrasounds, a new AI algorithm is able to detect cystic hygroma, a rare embryonic developmental disorder, within the first trimester of pregnancy. In a new proof-of-concept study led by Dr. Mark Walker at the uOttawa Faculty of Medicine, researchers are pioneering the use of a unique AI-based deep learning model as an assistive tool for the rapid and accurate reading of ultrasound images. It is trailblazing work because, although deep learning models have become increasingly popular for interpreting medical images and detecting disorders, their application to obstetric ultrasonography is still in its nascent stages. Few AI-enabled studies have been published in this field. The goal of the team's study was to demonstrate the potential for deep-learning architecture to support early and reliable identification of cystic hygroma from first-trimester ultrasound scans.
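The excerpt does not spell out the team's architecture, but the task it describes is standard supervised image classification. A minimal sketch of that general recipe, assuming transfer learning from a pretrained ResNet-18 and a made-up data/train/{normal,cystic_hygroma} folder layout (none of which come from the study), might look like this:

```python
# Minimal transfer-learning sketch for binary ultrasound classification
# (cystic hygroma vs. normal). Architecture, paths, and hyperparameters
# are illustrative assumptions, not the study's actual configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Grayscale ultrasound frames are replicated to 3 channels so a
# pretrained ImageNet backbone can be reused.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: data/train/{normal,cystic_hygroma}/*.png
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two output classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```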


UK government to set out AI regulation plans

#artificialintelligence

The UK government will reveal its plans for regulating artificial intelligence (AI) today, saying it wants to hand more powers to existing regulators to deal with algorithms and automated systems rather than set up a dedicated body to look at issues around AI. Under plans outlined in a new AI paper, regulators such as the Information Commissioner's Office (ICO) and the Competition and Markets Authority would be asked to monitor the impact of AI on their sectors, based on a set of guiding principles. The government says the regulators will be encouraged to take a "light touch" approach to enforcing these principles. The paper will be published this morning, when the Data Protection and Digital Information Bill, previously referred to as the Data Reform Bill, which sets out the UK's post-Brexit data regime, is introduced in Parliament. Full details of the UK AI regulations have yet to be revealed, but the government says its plans will "allow different regulators to take a tailored approach to the use of AI in a range of settings". It claims this "better reflects the growing use of AI in a range of sectors".


This robot dog just taught itself to walk

MIT Technology Review

The researchers' algorithm, called Dreamer, uses past experiences to build up a model of the surrounding world. Dreamer also lets the robot carry out trial-and-error calculations in a computer program rather than in the real world, by predicting the likely outcomes of its possible actions. This allows it to learn faster than it could purely by doing. Once the robot had learned to walk, it kept learning to adapt to unexpected situations, such as resisting being toppled by a stick. "Teaching robots through trial and error is a difficult problem, made even harder by the long training times such teaching requires," says Lerrel Pinto, an assistant professor of computer science at New York University who specializes in robotics and machine learning.
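Dreamer's core recipe, as described here, is to fit a predictive model of the world from logged experience and then do the trial and error inside that model. A toy sketch of the same structure, using a 1-D point environment, a linear dynamics model and a random-shooting planner (all stand-ins, far simpler than the latent world model and actor-critic the actual Dreamer algorithm trains), could look like this:

```python
# Toy illustration of Dreamer's core recipe: (1) fit a dynamics model to
# logged experience, (2) evaluate candidate action sequences by imagined
# rollouts in that model rather than on the real system. Everything here
# is a simplified stand-in for the real algorithm.
import numpy as np

rng = np.random.default_rng(0)
GOAL = 2.0

def real_step(x, a):
    """Ground-truth dynamics the agent never sees directly."""
    return x + 0.1 * a + rng.normal(scale=0.01)

# 1) Collect random experience, then fit a linear model x' ~ [x, a] @ w.
X, y = [], []
x = 0.0
for _ in range(500):
    a = rng.uniform(-1, 1)
    x_next = real_step(x, a)
    X.append([x, a]); y.append(x_next)
    x = x_next
w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

def imagined_step(x, a):
    """One step of 'dreaming' with the learned model."""
    return np.array([x, a]) @ w

# 2) Trial and error in imagination: sample action sequences, roll them
# out in the learned model, keep the best.
def plan(x0, horizon=10, candidates=200):
    best_seq, best_return = None, -np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, size=horizon)
        x, ret = x0, 0.0
        for a in seq:
            x = imagined_step(x, a)
            ret -= abs(x - GOAL)  # reward: stay near the goal
        if ret > best_return:
            best_seq, best_return = seq, ret
    return best_seq

x = 0.0
for t in range(40):
    a = plan(x)[0]  # execute only the first planned action (MPC style)
    x = real_step(x, a)
print(f"final state {x:.2f} (goal {GOAL})")
```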


OSU Uses AI to Save Bees - The Corvallis Advocate

#artificialintelligence

Researchers in the Oregon State University College of Engineering have harnessed the power of artificial intelligence to help protect bees from pesticides. Cory Simon, assistant professor of chemical engineering, and Xiaoli Fern, associate professor of computer science, led the project, which involved training a machine learning model to predict whether any proposed new herbicide, fungicide or insecticide would be toxic to honey bees based on the compound's molecular structure. The findings, featured on the cover of The Journal of Chemical Physics in a […]
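The excerpt describes a structure-to-toxicity classifier. A common generic baseline for that kind of task pairs molecular fingerprints with an off-the-shelf classifier; the sketch below uses RDKit Morgan fingerprints and a random forest on made-up SMILES strings and labels, as an illustration of the task setup rather than the OSU team's published model:

```python
# Illustrative baseline for predicting honey-bee toxicity from molecular
# structure: Morgan fingerprints + a random-forest classifier. This is a
# generic recipe, not the OSU team's model; the SMILES strings and
# labels below are made-up placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """SMILES string -> 2048-bit Morgan fingerprint as a numpy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.array(fp)

# Placeholder data: (SMILES, toxic-to-bees label).
train = [
    ("CCO", 0),                                  # ethanol
    ("c1ccccc1", 0),                             # benzene
    ("CC(=O)Oc1ccccc1C(=O)O", 0),                # aspirin
    ("ClC(Cl)C(c1ccc(Cl)cc1)c1ccc(Cl)cc1", 1),   # DDT-like organochlorine
]
X = np.array([featurize(s) for s, _ in train])
y = np.array([label for _, label in train])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict_proba([featurize("CCN(CC)CC")]))  # query a new compound
```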


These robots were trained on AI. They became racist and sexist.

Washington Post - Technology News

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. With demand heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.


Robot made of sticky tape and metal powder could crawl on your organs

New Scientist

Robots made from sticky tape and dust can morph into various shapes under the direction of a magnetic field. They may one day be able to crawl into computers to fix broken circuits, or even inside the human stomach to apply therapeutic patches to gastric ulcers. Soft robots that have no batteries, motors or electronics, and that are powered and controlled from a distance by light or magnets, are a popular area of research. But there are barriers to overcome before they can be used in practical applications, including the need for a cheap manufacturing process. Zhang Li at the Chinese University of Hong Kong and his colleagues discovered that a magnet-controlled robot can be created easily and at low cost using sticky tape onto which non-sticky wax has been printed in a specific pattern.


Bias in AI has a real impact on business growth. Here's why it needs to be tackled.

MIT Technology Review

As organizations across the globe realize the value of artificial intelligence, there is also a growing need to acknowledge the roadblocks and to remedy them in order to maximize the technology's impact. AI experts share their thoughts.


Musk said not one self-driving Tesla had ever crashed. By then, regulators already knew of 8

Los Angeles Times > Business

Elon Musk has long used his mighty Twitter megaphone to amplify the idea that Tesla's automated driving software isn't just safe -- it's safer than anything a human driver can achieve. That campaign kicked into overdrive last fall when the electric-car maker expanded its Full Self-Driving "beta" program from a few thousand people to a fleet that now numbers more than 100,000. The $12,000 feature purportedly lets a Tesla drive itself on highways and neighborhood streets, changing lanes, making turns and obeying traffic signs and signals. As critics scolded Musk for testing experimental technology on public roads without trained safety drivers as backups, Santa Monica investment manager and vocal Tesla booster Ross Gerber was among the allies who sprang to his defense. "There has not been one accident or injury since FSD beta launch," he tweeted in January.


Robot that can perceive its body has self-awareness, claim researchers

New Scientist

A robot can create a model of itself to plan how to move and reach a goal – something its developers say makes it self-aware, though others disagree. Every robot is trained in some way to do a task, often in a simulation. By seeing what to do, robots can then mimic the task. But they do so unthinkingly, perhaps relying on sensors to try to reduce collision risks, rather than having any understanding of why they are performing the task or a true awareness of where they are within physical space. It means they will often make mistakes – bashing their arm into an obstacle, for instance – that humans wouldn't because they would compensate for changes.
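The self-model described here is, at its simplest, a learned mapping from motor commands to body configuration that the robot can query instead of physically moving. A stripped-down illustration of that idea, assuming a two-joint planar arm, motor-babbling data and a small scikit-learn regressor (all placeholders, not the researchers' actual setup), might look like this:

```python
# Stripped-down "self-model" demo: a 2-joint planar arm learns where its
# hand ends up for given joint angles (motor babbling -> regression),
# then plans by querying the learned model instead of the real body.
# The arm, network and goal are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8  # link lengths

def true_hand_position(q):
    """Ground-truth forward kinematics (the 'real body')."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# 1) Motor babbling: random joint angles, observed hand positions.
Q = rng.uniform(-np.pi, np.pi, size=(2000, 2))
P = np.array([true_hand_position(q) for q in Q])

# 2) Fit the self-model: joint angles -> predicted hand position.
self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                          random_state=0).fit(Q, P)

# 3) Plan with the self-model: sample candidate postures, keep the one
# whose *predicted* hand position is nearest the goal.
goal = np.array([1.2, 0.9])
candidates = rng.uniform(-np.pi, np.pi, size=(5000, 2))
pred = self_model.predict(candidates)
best = candidates[np.argmin(np.linalg.norm(pred - goal, axis=1))]

print("chosen joints:", best)
print("actual hand position:", true_hand_position(best))
```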