AAAI AI-Alert for Feb 9, 2021
How Choreography Can Help Robots Come Alive
Consider this scene from the 2014 film Ex Machina: A young nerd, Caleb, is in a dim room with a scantily clad femmebot, Kyoko. Nathan, a brilliant roboticist, drunkenly stumbles in and brusquely tells Caleb to dance with the Kyoko-bot. To kick things off, Nathan presses a wall-mounted panel and the room lights shift suddenly to an ominous red, while Oliver Cheatham's disco classic "Get Down Saturday Night" starts to play. Kyoko--who seems to have done this before--wordlessly begins to dance, and Nathan joins his robotic creation in an intricately choreographed bit of pelvic thrusting. The scene suggests that Nathan imbued his robot creation with disco functionality, but how did he choreograph the dance on Kyoko, and why?
How Companies Tried to Use the Pandemic to Get Law Enforcement to Use More Drones
In April, as COVID-19 cases exploded across the U.S. and local officials scrambled for solutions, a police department in Connecticut tried a new way to monitor the spread of the virus. One morning, as masked shoppers lined up 6 feet apart outside Trader Joe's in Westport, the police department flew a drone overhead to observe their social distancing and detect potential coronavirus symptoms, such as high temperature and increased heart rate. According to internal emails, the captain flying the mission wanted to "take advantage" of the store's line. But the store had no heads-up about the flight, and neither did the customers on their grocery runs, even though the drone technology managed to track figures both inside and outside. The drone program was unveiled a week later when the department announced its "Flatten the Curve Pilot Program" in collaboration with the Canadian drone company Draganfly, which was due to last through the summer. But less than 48 hours after the program's public unveiling, the police department was forced to dump it amid intense backlash from Westport residents.
This is how we lost control of our faces
Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people's consent. This has led more and more of people's personal photos to be incorporated into systems of surveillance without their knowledge. It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.
Fractals can help AI learn to see more clearly--or at least more fairly
Most image-recognition systems are trained using large databases that contain millions of photos of everyday objects, from snakes to shakes to shoes. With repeated exposure, AIs learn to tell one type of object from another. Now researchers in Japan have shown that AIs can start learning to recognize everyday objects by being trained on computer-generated fractals instead. It's a weird idea but it could be a big deal. Generating training data automatically is an exciting trend in machine learning.
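The fractal images used for this kind of pretraining are typically generated procedurally, for example with iterated function systems (IFS). Below is a minimal sketch of that idea, a hypothetical toy version rather than the researchers' actual pipeline: each randomly sampled IFS defines one synthetic "class", so labeled training images come for free, with no photos of real objects or people involved.

```python
# Toy sketch (not the researchers' code): build a labeled image dataset from
# random iterated function systems. Class k = "orbits of the k-th random IFS".
import numpy as np

def random_ifs(n_maps=3, rng=None):
    """Sample a random IFS: a few affine maps (A, b), each rescaled to be a contraction."""
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(n_maps):
        A = rng.uniform(-0.6, 0.6, size=(2, 2))
        s = np.linalg.norm(A, 2)
        if s > 0.9:
            A *= 0.9 / s          # keep every map contractive so orbits stay bounded
        maps.append((A, rng.uniform(-0.5, 0.5, size=2)))
    return maps

def render_fractal(ifs, size=64, n_points=20000, rng=None):
    """Play the 'chaos game': iterate randomly chosen maps and rasterize the orbit."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    x = rng.uniform(-1, 1, size=2)
    for _ in range(n_points):
        A, b = ifs[rng.integers(len(ifs))]
        x = A @ x + b
        i, j = ((x + 1.5) / 3.0 * (size - 1)).astype(int)
        if 0 <= i < size and 0 <= j < size:
            img[i, j] += 1.0
    return img / (img.max() + 1e-8)   # normalize pixel intensities to [0, 1]

# Tiny labeled dataset: 10 synthetic classes, 5 renderings each.
rng = np.random.default_rng(0)
systems = [random_ifs(rng=rng) for _ in range(10)]
images = [render_fractal(s, rng=rng) for s in systems for _ in range(5)]
labels = [k for k in range(10) for _ in range(5)]
```

In the study, a network pretrained on synthetic images like these is then fine-tuned on real photos; the sketch above stops at generating the labeled fractal data.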
Boston Dynamics adds an 'arm' to its robotic dog Spot
After listening to early adopters, Boston Dynamics gave its robot dog a hardware boost and extended WiFi capabilities. It can be controlled remotely using the company's new web browser-based interface, Scout. It's the first Boston Dynamics device equipped with self-charging capabilities and a dock, which means it can be deployed for longer-term missions "with little to no human interaction," Boston Dynamics said. The previous version of Spot had around 90 minutes of battery life before requiring a manual charge.
AI chat bots can bring you back from the dead, sorta
The idea of chatbots based on dead people raises several ethical questions surrounding privacy. People only share so much on social media, so algorithms relying on that data alone would be flawed. Humans are also highly complex and influenced by experiences that aren't always shared via text messages. Microsoft's patent suggests that the company could use crowdsourced data to fill in any gaps. In other words, the resulting chatbot could end up saying things the person never said.
How Censorship Can Influence Artificial Intelligence
Artificial intelligence is hardly confined by international borders, as businesses, universities, and governments tap a global pool of ideas, algorithms, and talent. Yet the AI programs that result from this global gold rush can still reflect deep cultural divides. New research shows how government censorship affects AI algorithms--and can influence the applications built with those algorithms. Margaret Roberts, a political science professor at UC San Diego, and Eddie Yang, a PhD student there, examined AI language algorithms trained on two sources: the Chinese-language version of Wikipedia, which is blocked within China; and Baidu Baike, a similar site operated by China's dominant search engine, Baidu, that is subject to government censorship. Baidu did not respond to a request for comment.
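Roberts and Yang trained word embeddings on each corpus and compared the associations those embeddings assign to politically sensitive terms. The toy sketch below illustrates the underlying intuition with plain co-occurrence counts rather than trained embeddings; the mini-corpora and the target word are hypothetical stand-ins, not data from the study.

```python
# Toy sketch: the same word picks up different associations depending on which
# corpus it is learned from. We count context words around a target term in
# two small, hypothetical corpora and compare the results.
from collections import Counter

def context_counts(corpus, target, window=2):
    """Count words co-occurring with `target` within a +/- `window` token window."""
    counts = Counter()
    for sentence in corpus:
        toks = sentence.split()
        for i, tok in enumerate(toks):
            if tok == target:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                counts.update(t for t in toks[lo:hi] if t != target)
    return counts

# Hypothetical mini-corpora standing in for an uncensored vs. a censored source.
corpus_a = ["democracy brings freedom and progress",
            "citizens praise democracy and open debate"]
corpus_b = ["democracy causes chaos and instability",
            "officials warn democracy invites disorder"]

print(context_counts(corpus_a, "democracy"))   # e.g. freedom, praise, open ...
print(context_counts(corpus_b, "democracy"))   # e.g. chaos, warn, disorder ...
```

Any model built on top of either representation inherits those associations, which is how corpus-level censorship can propagate into downstream applications.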
Clearview AI's Facial Recognition App Called Illegal in Canada
The facial recognition app Clearview AI is not welcome in Canada and the company that developed it should delete Canadians' faces from its database, the country's privacy commissioner said on Wednesday. "What Clearview does is mass surveillance, and it is illegal," Commissioner Daniel Therrien said at a news conference. He forcefully denounced the company as putting all of society "continually in a police lineup." Though the Canadian government does not have legal authority to enforce photo removal, the position -- the strongest one an individual country has taken against the company -- was clear: "This is completely unacceptable." Clearview scraped more than three billion photos from social media networks and other public websites in order to build a facial recognition app that is now used by over 2,400 U.S. law enforcement agencies, according to the company.
Machine learning made easy for optimizing chemical reactions
The optimization of reactions used to synthesize target compounds is pivotal to chemical research and discovery, whether in developing a route for manufacturing a life-saving medicine [1] or unlocking the potential of a new material [2]. But reaction optimization requires iterative experiments to balance the often conflicting effects of numerous coupled variables, and frequently involves finding the sweet spot among thousands of possible sets of experimental conditions. Expert synthetic chemists currently navigate this expansive experimental void using simplified model reactions, heuristic approaches and intuition derived from observation of experimental data [3]. Writing in Nature, Shields et al. [4] report machine-learning software that can optimize diverse classes of reaction with fewer iterations, on average, than are needed by humans. Machine learning has emerged as a useful tool for various aspects of chemical synthesis because it is well suited to recognizing patterns in multidimensional data sets [5] and extrapolating from them the predictive models used to solve synthetic problems.
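Shields and colleagues' software is built around Bayesian optimization: a surrogate model predicts reaction yield across candidate conditions, and an acquisition function chooses the next experiment to run. The sketch below shows that loop in miniature; the two reaction variables, the condition grid, and the synthetic yield function are hypothetical stand-ins, not the descriptors or benchmark reactions from the paper.

```python
# Toy Bayesian-optimization loop over a grid of reaction conditions: a Gaussian
# process models yield, and expected improvement picks the next experiment.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical search space: temperature (deg C) x catalyst loading (mol%).
temps = np.linspace(20, 120, 21)
loadings = np.linspace(1, 10, 10)
X_grid = np.array([[t, c] for t in temps for c in loadings])

def run_experiment(x):
    """Stand-in for a real yield measurement (unknown to the optimizer)."""
    t, c = x
    return 80 * np.exp(-((t - 75) / 30) ** 2 - ((c - 6) / 4) ** 2) + rng.normal(0, 1)

# Seed with a few random experiments, then iterate.
idx = rng.choice(len(X_grid), size=5, replace=False)
X_obs = X_grid[idx]
y_obs = np.array([run_experiment(x) for x in X_obs])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_grid, return_std=True)
    best = y_obs.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = X_grid[np.argmax(ei)]                         # most promising condition
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, run_experiment(x_next))

print("best conditions found:", X_obs[np.argmax(y_obs)], "yield ~", y_obs.max())
```

Each iteration trades off exploring uncertain regions of condition space against exploiting conditions the model already predicts to be high-yielding, which is what lets this kind of optimizer converge in fewer experiments than exhaustive screening.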