AAAI AI-Alert for Jan 26, 2021
BioScript
This paper introduces BioScript, a domain-specific language (DSL) for programmable biochemistry that executes on emerging microfluidic platforms. The goal of this research is to provide a simple, intuitive, and type-safe DSL that is accessible to life science practitioners. The novel feature of the language is its syntax, which aims to optimize human readability; the technical contribution of the paper is the BioScript type system. The type system ensures that certain errors specific to biochemistry, such as unsafe chemical interactions, do not occur. Results are obtained using a custom-built compiler that implements the BioScript language and type system.

The last two decades have witnessed the emergence of software-programmable laboratory-on-a-chip (pLoC) technology, enabled by advances in microfabrication coupled with a growing scientific understanding of microfluidics, the fundamental science of fluid behavior at the micro- to nanoliter scale. The net result of these collective advancements is that many experimental laboratory procedures have been miniaturized, accelerated, and automated, similar in principle to how the world's earliest computers automated tedious mathematical calculations that were previously performed by hand. Although the vast majority of microfluidic devices are effectively application-specific integrated circuits (ASICs), a variety of programmable LoCs have been demonstrated. With a handful of exceptions, research on programming languages and compiler design for programmable LoCs has lagged behind their silicon counterparts. To address this need, this paper presents a domain-specific programming language (DSL) and type system for a specific class of pLoCs that manipulate discrete droplets of liquid on a two-dimensional grid. The basic principles of the language and type system readily generalize to programmable LoCs realized across a wide variety of microfluidic technologies.
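The paper's own syntax and typing rules are not reproduced in this summary. As a rough illustration of the idea only, the Python sketch below shows how a compiler-style check could reject a protocol step that combines chemically incompatible reagents; the reagent groups, interaction table, and mix/check_mix functions are assumptions made for this example, not BioScript's actual API.

    # Hypothetical sketch of a chemical-interaction check, in the spirit of a
    # type system that refuses to let incompatible reagents be combined. The
    # reagent groups, interaction table, and function names below are
    # illustrative assumptions, not BioScript's actual syntax or semantics.
    UNSAFE_PAIRS = {
        frozenset({"strong_acid", "strong_base"}),   # runaway exothermic mix
        frozenset({"oxidizer", "organic_solvent"}),  # fire/explosion hazard
    }

    REAGENT_GROUPS = {
        "HCl": "strong_acid",
        "NaOH": "strong_base",
        "H2O2": "oxidizer",
        "acetone": "organic_solvent",
        "water": "inert",
    }

    class UnsafeMixError(Exception):
        """Raised when a protocol step would combine incompatible reagents."""

    def check_mix(a: str, b: str) -> None:
        groups = frozenset({REAGENT_GROUPS[a], REAGENT_GROUPS[b]})
        if groups in UNSAFE_PAIRS:
            raise UnsafeMixError(f"refusing to mix {a} with {b}")

    def mix(a: str, b: str) -> str:
        check_mix(a, b)  # in a typed DSL this check happens before execution
        return f"mixture({a},{b})"

    print(mix("HCl", "water"))  # allowed
    print(mix("HCl", "NaOH"))   # rejected: raises UnsafeMixError

In BioScript itself this reasoning is performed by the type system rather than by a runtime check, so an unsafe protocol can be rejected before it ever reaches the chip.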
This AI Could Go From 'Art' to Steering a Self-Driving Car
You've probably never wondered what a knight made of spaghetti would look like, but here's the answer anyway--courtesy of a clever new artificial intelligence program from OpenAI, a company in San Francisco. The program, DALL-E, released earlier this month, can concoct images of all sorts of weird things that don't exist, like avocado armchairs, robot giraffes, or radishes wearing tutus. OpenAI generated several images, including the spaghetti knight, at WIRED's request. DALL-E is a version of GPT-3, an AI model trained on text scraped from the web that's capable of producing surprisingly coherent text. DALL-E was fed images and accompanying descriptions; in response, it can generate a decent mashup image.
Human rights group urges New York to ban police use of facial recognition
Facial recognition technology amplifies racist policing, threatens the right to protest and should be banned globally, Amnesty International said as it urged New York City to pass a ban on its use in mass surveillance by law enforcement. "Facial recognition risks being weaponised by law enforcement against marginalised communities around the world," said Matt Mahmoudi, AI and human rights researcher at Amnesty. "From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights. New Yorkers should be able to go about their daily lives without being tracked by facial recognition. Other major cities across the US have already banned facial recognition, and New York must do the same." Albert Fox Cahn of New York's Urban Justice Centre, which is supporting Amnesty's Ban the Scan campaign, said: "Facial recognition is biased, broken, and antithetical to democracy."
Designing customized 'brains' for robots
"The hang up is what's going on in the robot's head," she adds. Perceiving stimuli and calculating a response takes a "boatload of computation," which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot's "mind" and body. The method, called robomorphic computing, uses a robot's physical layout and intended applications to generate a customized computer chip that minimizes the robot's response time. The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients.
When to expect the real self-driving revolution
This year, new technologies will enable more drivers to take their hands off the wheel while on the road. But that doesn't mean their cars will be fully self-driving -- that day still remains far in the future. Automakers like General Motors (GM), Ford (F) and Stellantis (the company formed in the recent merger of Fiat Chrysler and Groupe PSA) are introducing -- or upgrading existing -- technologies that allow drivers to completely take their hands off the steering wheel and pull their feet away from the pedals for long stretches of time. But these systems will still be limited in their capabilities. Drivers will still be required to pay constant attention to the road, for instance.
Behind those dancing robots, scientists had to bust a move
The man who designed some of the world's most advanced dynamic robots was on a daunting mission: programming his creations to dance to the beat with a mix of fluid, explosive and expressive motions that are almost human. Almost a year and a half of choreography, simulation, programming and upgrades, capped by two days of filming, went into producing a video that runs less than three minutes. The clip, showing robots dancing to the 1962 hit "Do You Love Me?" by The Contours, was an instant hit on social media, attracting more than 23 million views during the first week. It shows two of Boston Dynamics' humanoid Atlas research robots doing the twist, the mashed potato and other classic moves, joined by Spot, a doglike robot, and Handle, a wheeled robot designed for lifting and moving boxes in a warehouse or truck. Boston Dynamics founder and chairperson Marc Raibert says what the robot maker learned was far more valuable.
How to train a robot (using AI and supercomputers)
To navigate built environments, robots must be able to sense their surroundings and make decisions about how to interact with their locale. Researchers at one robotics company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum. Efforts to train using images with human-centric perspectives failed. Beksi's research focuses on robotics, computer vision, and cyber-physical systems.