AI-Alerts


Google open-sources framework that reduces AI training costs by up to 80%

#artificialintelligence

Google researchers recently published a paper describing a framework -- SEED RL -- that scales AI model training to thousands of machines. They say it could facilitate training at millions of frames per second on a single machine while reducing costs by up to 80%, potentially leveling the playing field for startups that previously couldn't compete with large AI labs. Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington's Grover model, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI spent $256 per hour training its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
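
The paper's core idea is to move neural-network inference off the distributed actors and onto the central learner, so actors become cheap environment-stepping workers. The sketch below illustrates that centralized-inference pattern in miniature; the classes and names are illustrative stand-ins, not the actual google-research/seed_rl API.

```python
# Toy sketch of SEED RL's centralized-inference idea: actors only step
# environments and ship observations to the learner, which batches them
# through a single copy of the model. Illustrative, not the real API.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS, N_ACTORS = 8, 4, 16

class Learner:
    """Holds the only copy of the policy and serves batched inference."""
    def __init__(self):
        self.weights = rng.normal(size=(OBS_DIM, N_ACTIONS))

    def act(self, batched_obs):
        # One forward pass for every actor's observation at once: the model
        # lives only here, keeping accelerators busy and actors cheap.
        return (batched_obs @ self.weights).argmax(axis=1)

class Actor:
    """A lightweight worker that only steps a (stubbed) environment."""
    def __init__(self):
        self.obs = rng.normal(size=OBS_DIM)

    def step(self, action):
        # Stand-in for env.step(action); returns the next observation.
        self.obs = np.roll(self.obs, 1) + 0.01 * float(action)
        return self.obs

learner = Learner()
actors = [Actor() for _ in range(N_ACTORS)]
for _ in range(3):
    batch = np.stack([a.obs for a in actors])  # observations go to the learner
    actions = learner.act(batch)               # actions come back
    for actor, action in zip(actors, actions):
        actor.step(action)
print("stepped", 3 * N_ACTORS, "frames with a single model copy")
```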


There Is a Racial Divide in Speech-Recognition Systems, Researchers Say

#artificialintelligence

The study tested five publicly available tools from Apple, Amazon, Google, IBM and Microsoft that anyone can use to build speech recognition services. These tools are not necessarily what Apple uses to build Siri or Amazon uses to build Alexa, but they may share underlying technology and practices with services like Siri and Alexa. The tools were tested last year, in late May and early June, and may operate differently now. The study also notes that Apple's tool was set up differently from the others and required some additional engineering before it could be tested.
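
The standard metric in evaluations like this is word error rate (WER): the word-level edit distance between what a speaker said and what the system transcribed, divided by the length of the reference. A minimal sketch, with invented transcripts standing in for study data:

```python
# Minimal sketch of the kind of comparison such a study makes: compute
# word error rate (WER) per demographic group. The transcripts below are
# invented placeholders, not data from the study.

def wer(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

samples = {  # (reference transcript, ASR output) pairs, grouped by speaker
    "group_a": [("turn the lights off", "turn the lights off")],
    "group_b": [("turn the lights off", "turn delight soft")],
}
for group, pairs in samples.items():
    avg = sum(wer(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: mean WER = {avg:.2f}")
```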


AI Ethics: DNV GL Exec on Why Women Are Key to Ethics Research

#artificialintelligence

"If you look at the key names in the global debate on AI ethics, it is in fact dominated by women who have many different types of backgrounds, not only tech backgrounds." Artificial Intelligence (AI) is the game-changer in the industry, turbocharging new use cases in transportation, law enforcement, e-commerce, retail, healthcare, and entertainment. However, the quick pace of transformation and adoption is not accompanied by concrete industry standards on AI ethics and fairness in Machine Learning algorithms. While ethics in AI have been a dominant narrative for sometime, Big Tech is still seeking ways to design a code of conduct when building ML algorithms. Some tech giants like Microsoft have laid down guidelines to responsible AI and has operationalized responsible AI at scale, others are yet to follow suit.



Portable AI device turns coughing sounds into health data for flu and pandemic forecasting

#artificialintelligence

University of Massachusetts Amherst researchers have invented a portable surveillance device powered by machine learning - called FluSense - which can detect coughing and crowd size in real time, then analyze the data to directly monitor flu-like illnesses and influenza trends. The FluSense creators say the new edge-computing platform, envisioned for use in hospitals, healthcare waiting rooms and larger public spaces, may expand the arsenal of health surveillance tools used to forecast seasonal flu and other viral respiratory outbreaks, such as the COVID-19 pandemic or SARS. Models like these can be lifesavers by directly informing the public health response during a flu epidemic. These data sources can help determine the timing for flu vaccine campaigns, potential travel restrictions, the allocation of medical supplies and more. "This may allow us to predict flu trends in a much more accurate manner," says co-author Tauhidur Rahman, assistant professor of computer and information sciences, who advises Ph.D. student and lead author Forsad Al Hossain.
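
FluSense's internals aren't detailed here, but the edge-computing pattern it represents can be sketched generically: classify short audio frames on-device, keep only aggregate counts, and pair them with a crowd-size estimate (FluSense pairs a microphone array with a thermal camera). Everything below -- the stub classifier, thresholds, and numbers -- is hypothetical:

```python
# Hypothetical sketch of the edge pattern FluSense exemplifies. The
# classifier and crowd estimator are stubs, not the FluSense models.
import numpy as np

rng = np.random.default_rng(1)

def classify_frame(audio_frame):
    """Stub for an on-device cough classifier."""
    return float(np.mean(audio_frame ** 2)) > 0.5  # placeholder decision rule

def estimate_crowd_size():
    """Stub for a crowd-size estimator."""
    return int(rng.integers(5, 30))

def minute_report(audio_frames):
    coughs = sum(classify_frame(f) for f in audio_frames)
    crowd = estimate_crowd_size()
    # Only aggregates leave the device -- no raw audio is retained, which
    # is what makes deployment in waiting rooms plausible.
    return {"coughs": coughs, "crowd_size": crowd,
            "coughs_per_person": round(coughs / max(crowd, 1), 2)}

# 60 one-second frames of synthetic audio at varying loudness.
frames = [rng.normal(scale=s, size=16000) for s in rng.uniform(0.2, 1.2, 60)]
print(minute_report(frames))
```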


Global Big Data Conference

#artificialintelligence

Last week, Microsoft gathered experts from academia, civil society, policy making and more to discuss one of the most important topics in tech at the moment: responsible AI (RAI). Microsoft's Data Science and Law Forum in Brussels was the setting for the discussion, which focused on rules for effective governance of AI. While AI governance and regulation may not be everyone's cup of tea, the event covered an array of subjects where this has become a red-hot issue, such as the militarization of AI, liability rules for AI systems, facial recognition technology, and the future of quantum computing. The event also gave Microsoft an opportunity to showcase its strategy around this important area. A few highlights are worth sharing, so let's dig a bit deeper into what Microsoft is doing in RAI, why it's important and what it means for the market moving forward.


DSI Alumni Use Machine Learning to Discover Coronavirus Treatments

#artificialintelligence

Satz and Averso, who met while students at DSI, are deeply committed to using "data for good." The pair has worked together for several years at the intersection of data science and health care and formed EVQLV in December 2019 to use AI to accelerate the speed at which healing is discovered, developed, and delivered. The company has already grown to 12 team members with skills ranging from machine learning and molecular biology to software engineering and antibody design, cloud computing, and clinical development.


You -- yes, you -- can help AI predict the spread of coronavirus

#artificialintelligence

Roni Rosenfeld makes predictions for a living. Typically, he uses artificial intelligence to forecast the spread of the seasonal flu. But with the coronavirus outbreak claiming lives all over the world, he's switched to predicting the spread of Covid-19. It was the Centers for Disease Control and Prevention (CDC) that asked Rosenfeld to take on this task. As a professor of computer science at Carnegie Mellon University, he leads the machine learning department and the Delphi research group, which aims "to make epidemiological forecasting as universally accepted and useful as weather forecasting is today."


Study shows widely used machine learning methods don't work as claimed

#artificialintelligence

Models and algorithms for analyzing complex networks are widely used in research and affect society at large through their applications in online social networks, search engines, and recommender systems. According to a new study, however, one widely used algorithmic approach for modeling these networks is fundamentally flawed, failing to capture important properties of real-world complex networks. "It's not that these techniques are giving you absolute garbage. They probably have some information in them, but not as much information as many people believe," said C. "Sesh" Seshadhri, associate professor of computer science and engineering in the Baskin School of Engineering at UC Santa Cruz. Seshadhri is first author of a paper on the new findings published March 2 in Proceedings of the National Academy of Sciences.
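
The technique under scrutiny in the PNAS paper is low-rank (low-dimensional) embedding of graphs, and a key property such embeddings fail to preserve is triangle structure. The toy computation below illustrates that failure mode on a deliberately triangle-rich graph; it is an illustration of the phenomenon, not the paper's argument:

```python
# Rough illustration (not the paper's proof) of the failure mode described:
# low-rank approximations of a triangle-rich graph lose most of its triangles.
import numpy as np

n_blocks = 50
n = 3 * n_blocks
A = np.zeros((n, n))
for t in range(n_blocks):  # a graph that is a disjoint union of triangles
    i, j, k = 3 * t, 3 * t + 1, 3 * t + 2
    for a, b in ((i, j), (j, k), (i, k)):
        A[a, b] = A[b, a] = 1.0

def count_triangles(adj):
    # For a symmetric 0/1 matrix with zero diagonal, trace(A^3)/6 = triangles.
    return round(np.trace(adj @ adj @ adj) / 6)

def rank_k_graph(adj, k, thresh=0.5):
    """Keep the k largest-magnitude eigenpairs, then binarize."""
    vals, vecs = np.linalg.eigh(adj)
    top = np.argsort(np.abs(vals))[::-1][:k]
    approx = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    B = (approx > thresh).astype(float)
    np.fill_diagonal(B, 0)  # no self-loops
    return B

print("original triangles:", count_triangles(A))
for k in (5, 25, 100):
    kept = count_triangles(rank_k_graph(A, k))
    print(f"rank-{k} reconstruction keeps {kept} triangles")
```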


Answering the Question Why: Explainable AI

#artificialintelligence

The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI? Although explaining the results of Machine Learning models -- and producing consistent results from them -- has never been easy, a number of emergent techniques have recently appeared to open the proverbial 'black box' that renders these models so difficult to explain. One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether those events are related and how frequently they occur together. When the knowledge graph environment is endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the causation critical for Explainable AI.
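
As a concrete toy of that idea, the sketch below stores events as timestamped triples, replays a time window, and counts which event types co-occur -- the raw material for the relatedness and triggering questions described above. The schema and data are invented for illustration:

```python
# Hypothetical sketch of a temporal knowledge graph: events as timestamped
# subject-predicate-object triples, queried for order and co-occurrence.
from collections import Counter
from itertools import combinations

triples = [  # (time, subject, predicate, object) -- invented sample data
    (1, "server_A", "emitted", "disk_warning"),
    (2, "server_A", "emitted", "latency_spike"),
    (2, "service_B", "depends_on", "server_A"),
    (3, "service_B", "emitted", "timeout_error"),
    (7, "server_A", "emitted", "disk_warning"),
    (8, "service_B", "emitted", "timeout_error"),
]

def events_between(t0, t1):
    """Traverse the temporal dimension: emitted events in a time window."""
    return [tr for tr in triples if t0 <= tr[0] <= t1 and tr[2] == "emitted"]

def cooccurrence(window=2):
    """Count how often two event types occur within `window` ticks."""
    pairs = Counter()
    emitted = [tr for tr in triples if tr[2] == "emitted"]
    for a, b in combinations(emitted, 2):
        if abs(a[0] - b[0]) <= window and a[3] != b[3]:
            pairs[tuple(sorted((a[3], b[3])))] += 1
    return pairs

print(events_between(1, 3))           # replaying events forward in time
print(cooccurrence().most_common(2))  # candidate related-event pairs
```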