AI-Alerts


Reports of the Association for the Advancement of Artificial Intelligence's 2020 Fall Symposium Series

Interactive AI Magazine

The Association for the Advancement of Artificial Intelligence's 2020 Fall Symposium Series was held virtually November 11-14, 2020, collocated with three symposia postponed from March 2020 due to the COVID-19 pandemic. There were five symposia in the fall program: AI for Social Good; Artificial Intelligence in Government and Public Sector; Conceptual Abstraction and Analogy in Natural and Artificial Intelligence; Physics-Guided AI to Accelerate Scientific Discovery; and Trust and Explainability in Artificial Intelligence for Human-Robot Interaction. Additionally, there were three symposia delayed from spring: AI Welcomes Systems Engineering: Towards the Science of Interdependence for Autonomous Human-Machine Teams; Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools, and Risks; and Towards Responsible AI in Surveillance, Media, and Security through Licensing.

Recent developments in big data and computational power are revolutionizing several domains, opening up new opportunities and challenges. Building on the success of the AI for Social Good symposium held in Washington, DC, in November 2019, we organized the 2020 edition of the symposium. This year we highlighted two themes, humanitarian relief and healthcare, where AI could be used for social good to advance the United Nations (UN) Sustainable Development Goals (SDGs), which touch every aspect of human, social, and economic development. The talks focused on identifying the critical needs and pathways for responsible AI solutions to achieve the SDGs, which demand holistic thinking about the trade-off between the benefits of automation and its potential side effects, especially in a year in which the COVID-19 pandemic has upended societies globally.


Artificial Intelligence Helps Diagnose Leukemia

#artificialintelligence

The presence of cancer of the lymphatic system is often determined by analyzing samples from the blood or bone marrow. A team led by Prof. Dr. Peter Krawitz from the University of Bonn had already shown in 2020 that artificial intelligence can help with the diagnosis of such lymphomas and leukemias. The technology fully exploits the potential of all measurement values and increases both the speed and the objectivity of the analyses compared with established processes. The method has now been further developed so that even smaller laboratories can benefit from this freely accessible machine learning method, an important step towards clinical practice. The study has been published in the journal Patterns.


Safety officials are hitting the brakes on Tesla's push for automated cars

#artificialintelligence

Tesla is getting ready to roll out a software upgrade that will allow a select few drivers to use more autonomous driving features in cities. Up to now, the beta versions of driver-assistance software made available to thousands of drivers in the US have been designed for the comparatively simple environment of highways. Computer-assisted urban driving would bring Tesla a step closer to CEO Elon Musk's vision of fully self-driving vehicles. But safety officials think the company is getting ahead of itself and putting drivers at risk. "Basic safety issues have to be addressed before they're then expanding it to other city streets and other areas," Jennifer Homendy, chair of the National Transportation Safety Board, a federal agency that investigates transportation accidents, said in a Sept. 19 interview with The Wall Street Journal.


Major study finds AI is at an "inflection point"

#artificialintelligence

A new report about artificial intelligence and its effects warns that AI has reached a turning point and that its negative effects can no longer be ignored.

The big picture: For all the sci-fi worries about ultra-intelligent machines or wide-scale job loss from automation, both of which would require artificial intelligence far more capable than what has been developed so far, the larger concern may be what happens if AI doesn't work as intended.

Background: The AI100 project, which was launched by Eric Horvitz, who served as Microsoft's first chief scientific officer, and is hosted by the Stanford Institute for Human-Centered AI (HAI), is meant to provide a longitudinal study of a technology that seems to be advancing by the day.

What's happening: The panel found AI has exhibited remarkable progress over the past five years, especially in natural language processing (NLP), the ability of AI to analyze and generate human language.

The catch: That means AI has reached a point where its downsides in the real world are becoming increasingly difficult to miss, and increasingly difficult to stop.


Could microscale concave interfaces help self-driving cars read road signs? – Physics World

#artificialintelligence

A structural colour technology that produces concentric rainbows could help autonomous vehicles read road signs, scientists in the US and China claim. As well as exploring the physics of these novel reflective surfaces, the researchers show that they can produce two different image signals at the same time. Autopilot systems that read both signals would be less likely to misinterpret altered road signs, they suggest. Car autopilot systems use infrared laser-based light detection and ranging (lidar) systems to scan their environment and recognize traffic situations. To read signs, autonomous vehicles rely on visible cameras and pattern recognition algorithms.


The Scientist and the A.I.-Assisted, Remote-Control Killing Machine

#artificialintelligence

That afternoon, he and his wife would leave their vacation home on the Caspian Sea and drive to their country house in Absard, a bucolic town east of Tehran, where they planned to spend the weekend. Iran's intelligence service had warned him of a possible assassination plot, but the scientist, Mohsen Fakhrizadeh, had brushed it off. Convinced that Mr. Fakhrizadeh was leading Iran's efforts to build a nuclear bomb, Israel had wanted to kill him for at least 14 years. But there had been so many threats and plots that he no longer paid them much attention. Despite his prominent position in Iran's military establishment, Mr. Fakhrizadeh wanted to live a normal life. And, disregarding the advice of his security team, he often drove his own car to Absard instead of having bodyguards drive him in an armored vehicle. It was a serious breach of security protocol, but he insisted. So shortly after noon on Friday, Nov. 27, he slipped behind the wheel of his black Nissan Teana sedan, his wife in the passenger seat beside him, and hit the road. Since 2004, when the Israeli government ordered its foreign intelligence agency, the Mossad, to prevent Iran from obtaining nuclear weapons, the agency had been carrying out a campaign of sabotage and cyberattacks on Iran's nuclear fuel enrichment facilities.


DRNets can solve Sudoku, speed scientific discovery

#artificialintelligence

Say you're driving with a friend in a familiar neighborhood, and the friend asks you to turn at the next intersection. The friend doesn't say which way to turn, but since you both know it's a one-way street, it's understood. That type of reasoning is at the heart of a new artificial-intelligence framework – tested successfully on overlapping Sudoku puzzles – that could speed discovery in materials science, renewable energy technology and other areas. An interdisciplinary research team led by Carla Gomes, the Ronald C. and Antonia V. Nielsen Professor of Computing and Information Science in the Cornell Ann S. Bowers College of Computing and Information Science, has developed Deep Reasoning Networks (DRNets), which combine deep learning – even with a relatively small amount of data – with an understanding of the subject's boundaries and rules, known as "constraint reasoning." Di Chen, a computer science doctoral student in Gomes' group, is first author of "Automating Crystal-Structure Phase Mapping by Combining Deep Learning with Constraint Reasoning," published Sept. 16 in Nature Machine Intelligence.
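The constraint-reasoning half of that combination can be illustrated in toy form: if a network outputs a probability for each digit in each Sudoku cell, the Sudoku rules become a differentiable penalty that vanishes on valid boards and can be minimized alongside an ordinary loss. This is a minimal sketch of the general idea, not the authors' actual DRNet architecture; the function name and the squared-error penalty form are illustrative choices.

```python
import numpy as np

def constraint_penalty(p):
    """p: (9, 9, 9) array, p[r, c, d] = probability that cell (r, c) holds digit d+1.
    The penalty is zero for a one-hot encoding of a valid solved board: in every
    row, column, and 3x3 block, each digit's total probability must equal 1."""
    groups = []
    for i in range(9):
        groups.append(p[i, :, :])   # row i: (cells, digits)
        groups.append(p[:, i, :])   # column i
    for br in range(3):
        for bc in range(3):
            groups.append(p[br*3:br*3+3, bc*3:bc*3+3, :].reshape(9, 9))  # 3x3 block
    # Sum probability mass per digit within each group; penalize deviation from 1.
    return sum(np.sum((g.sum(axis=0) - 1.0) ** 2) for g in groups)

# A standard valid solved Sudoku: cell (r, c) holds digit (3r + r//3 + c) mod 9.
solved = (3 * np.arange(9)[:, None] + np.arange(9)[:, None] // 3
          + np.arange(9)[None, :]) % 9
pen_solved = constraint_penalty(np.eye(9)[solved])       # zero: all rules satisfied

# A degenerate board where every cell is the digit 1 violates every group.
pen_bad = constraint_penalty(np.eye(9)[np.zeros((9, 9), dtype=int)])
```

In the full DRNets setting the same trick applies to crystallography: the "rules" are physical constraints on phase mixtures rather than Sudoku groups, and the penalty steers the deep model even when labeled data is scarce.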


Save the Right Whales by Cutting through the Wrong Noise

#artificialintelligence

Fewer than 400 North Atlantic right whales remain in the wild, and not even 100 of them are breeding females. Their biggest survival threats are boat strikes and entanglement in fishing gear. Protecting these whales, such as by diverting boats from dangerous encounters, requires locating them more reliably--and new technology, described in the Journal of the Acoustical Society of America, could help make that possible. To listen for marine life, researchers often deploy underwater microphones called hydrophones on buoys and robotic gliders. The recorded audio is converted into spectrograms: visual representations of sound used to pinpoint, for instance, specific whale species' calls.
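The audio-to-spectrogram step the article mentions is a short-time Fourier transform: slice the recording into overlapping windows and take the magnitude spectrum of each. The sketch below uses only NumPy; the window length, hop size, and the synthetic test tone are illustrative choices, not the values used in the study.

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Magnitude spectrogram: rows are frequency bins, columns are time frames."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i*hop : i*hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T        # (win_len//2 + 1, n_frames)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)        # Hz for each row
    times = (np.arange(n_frames) * hop + win_len / 2) / fs  # seconds for each column
    return freqs, times, spec

# One second of a synthetic 440 Hz tone standing in for hydrophone audio.
fs = 8000
t = np.arange(fs) / fs
freqs, times, spec = spectrogram(np.sin(2 * np.pi * 440 * t), fs)
peak_hz = freqs[spec.mean(axis=1).argmax()]  # brightest row sits near 440 Hz
```

A detector for right-whale upcalls would then look for a characteristic rising track in `spec` rather than a single bright row, but the representation it searches is built exactly this way.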


MIT: Measuring Media Bias in Major News Outlets With Machine Learning

#artificialintelligence

A study from MIT has used machine learning techniques to identify biased phrasing across around 100 of the largest and most influential news outlets in the US and beyond, including 83 of the most influential print news publications. It's a research effort that shows the way towards automated systems that could potentially auto-classify the political character of a publication, and give readers a deeper insight into the ethical stance of an outlet on topics that they may feel passionately about. The work centers on the way topics are addressed with particular phrasing, such as 'undocumented immigrant' versus 'illegal immigrant', 'fetus' versus 'unborn baby', or 'demonstrators' versus 'anarchists'. The project used Natural Language Processing (NLP) techniques to extract and classify such instances of 'charged' language (on the assumption that apparently more 'neutral' terms also represent a political stance) into a broad mapping that reveals left- and right-leaning bias across over three million articles from around 100 news outlets, resulting in a navigable bias landscape of the publications in question. The paper comes from Samantha D'Alonzo and Max Tegmark at MIT's Department of Physics, and observes that a number of recent initiatives around 'fact checking', in the wake of numerous 'fake news' scandals, can be interpreted as disingenuous and serving the causes of particular interests.
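A crude version of the phrase-frequency idea can be sketched as follows. The phrase pairs and the scoring rule below are illustrative stand-ins, not the study's actual lexicon or method: each pair lists a left-leaning phrasing first and a right-leaning phrasing second, and an outlet's score is the normalized imbalance between the two.

```python
from collections import Counter

# Hypothetical charged-phrase pairs: (left-leaning phrasing, right-leaning phrasing).
PHRASE_PAIRS = [
    ("undocumented immigrant", "illegal immigrant"),
    ("demonstrators", "anarchists"),
]

def bias_score(articles):
    """Return a score in [-1, 1]: -1 = only left-leaning phrasings found,
    +1 = only right-leaning phrasings, 0 = balanced or no matches."""
    counts = Counter()
    for text in articles:
        text = text.lower()
        for left, right in PHRASE_PAIRS:
            counts["left"] += text.count(left)
            counts["right"] += text.count(right)
    total = counts["left"] + counts["right"]
    return 0.0 if total == 0 else (counts["right"] - counts["left"]) / total

articles = [
    "Police clashed with demonstrators downtown.",
    "Officials detained an illegal immigrant at the border.",
]
score = bias_score(articles)  # 0.0: one left-leaning and one right-leaning phrase
```

The real study works at far larger scale, over three million articles, and places outlets on a landscape rather than a single axis, but the underlying signal is the same kind of relative phrase-frequency count.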


Machine learning is moving beyond the hype

#artificialintelligence

Machine learning has been around for decades, but for much of that time, businesses were only deploying a few models and those required tedious, painstaking work done by PhDs and machine learning experts. Over the past couple of years, machine learning has grown significantly thanks to the advent of widely available, standardized, cloud-based machine learning platforms. Today, companies across every industry are deploying millions of machine learning models across multiple lines of business. Tax and financial software giant Intuit started with a machine learning model to help customers maximize tax deductions; today, machine learning touches nearly every part of their business. In the last year alone, Intuit has increased the number of models deployed across their platform by over 50 percent.