Black box problem stunting ML adoption in default risk analysis

#artificialintelligence

Difficulties in explaining machine learning (ML) models are causing concern as banks look to the technology for default risk analysis, according to market participants. "Many different types of 'black-box' models have been developed out there, even by banks, claiming that they can accurately predict mortgage defaults. This is only partially true," said Panos Skliamis, chief executive officer at SPIN Analytics, in an email. "[These models] usually target a relatively short-term horizon, and their validation windows of testing remain in an environment too similar to that of the development samples. However, mortgage loans are almost always long-term and their lives extend over multiple economic cycles, while the entire world changes over time and several features of ML models [are] severely influenced by these changes in the environment," he said.
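
Skliamis's point about validation windows can be made concrete with an out-of-time split: hold out loans originated after a cutoff year rather than a random sample, so the test set actually spans a different economic environment. The sketch below is a minimal illustration of that idea, not SPIN Analytics' method; the DataFrame and column names ("origination_year", "default") are assumed.

```python
# Minimal out-of-time validation sketch (illustrative; column names assumed).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def out_of_time_validation(loans: pd.DataFrame, features: list, cutoff_year: int) -> dict:
    """Compare in-time AUC (same era as training) with out-of-time AUC."""
    pre = loans[loans["origination_year"] < cutoff_year]
    post = loans[loans["origination_year"] >= cutoff_year]

    train, val = train_test_split(pre, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier().fit(train[features], train["default"])

    # A random validation split from the development era can look strong
    # while performance on a later economic regime quietly deteriorates.
    in_time = roc_auc_score(val["default"], model.predict_proba(val[features])[:, 1])
    out_of_time = roc_auc_score(post["default"], model.predict_proba(post[features])[:, 1])
    return {"in_time_auc": in_time, "out_of_time_auc": out_of_time}
```

A large gap between the two AUCs is exactly the kind of regime sensitivity Skliamis describes.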


Chinese robovan startup aims to go from theme parks to city streets

#artificialintelligence

One of China's newest autonomous vehicle makers, Neolix, recently put self-driving microvans into action as it looks to scale up its solution to the country's logistics puzzle, one made more complex by a surge in online shopping. The Beijing-based startup, barely a year old, has already deployed the vehicles in the capital and other cities, but it faces stiff competition in a crowded field where other players, especially e-commerce groups, are racing to develop similar robovans. "Operating 10,000 units will be an industry milestone and it is crucial [for us] to achieve it," said Yu Enyuan, 45, Neolix's founder and chief executive. Neolix's ambition is to replace the roughly 40 million vehicles providing so-called last-mile logistics in China, a market projected to be worth 3 trillion yuan ($428 billion). These home deliveries are now handled mainly by two- and three-wheel electric motorbikes zigzagging through neighborhoods to carry everything from milk tea to mattresses.


Regulator looking at use of facial recognition at King's Cross site

The Guardian

The UK's privacy regulator said it is studying the use of controversial facial recognition technology by property companies amid concerns that its use in CCTV systems at the King's Cross development in central London may not be legal. The Information Commissioner's Office warned businesses using the surveillance technology that they needed to demonstrate its use was "strictly necessary and proportionate" and had a clear basis in law. The data protection regulator added it was "currently looking at the use of facial recognition technology" by the private sector and warned it would "consider taking action where we find non-compliance with the law". On Monday, the owners of the King's Cross site confirmed that facial recognition software was used around the 67-acre, 50-building site "in the interest of public safety and to ensure that everyone who visits has the best possible experience". It is one of the first landowners or property companies in Britain to acknowledge deploying the software, described by a human rights pressure group as "authoritarian", partly because it captures images of people without their consent.


Tesla has a huge incentive to deploy self-driving tech. But is the world ready?

#artificialintelligence

Along with sustainable electric transportation, Elon Musk views autonomy as a core element of Tesla Inc.'s "fundamental goodness." Humans will be freed of the tedium of driving, he told Wall Street last year. Millions of lives will be saved. There is another incentive for Musk to put driverless cars on the road, though. The day he does, hundreds of millions of dollars' worth of stored-up revenue becomes eligible for a trip straight to Tesla's perpetually stressed bottom line.


High-Stakes AI Decisions Need to Be Automatically Audited

#artificialintelligence

Today's AI systems make weighty decisions regarding loans, medical diagnoses, parole, and more. They're also opaque systems, which makes them susceptible to bias. In the absence of transparency, we will never know why a 41-year-old white male and an 18-year-old black woman who commit similar crimes are assessed as "low risk" versus "high risk" by AI software. Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence and a professor in the Allen School of Computer Science at the University of Washington. Tianhui Michael Li is founder and president of Pragmatic Data, a data science and AI training company.
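
One concrete shape such an automatic audit could take, sketched below under assumed inputs (a DataFrame of model outputs with hypothetical "group" and "risk_score" columns; this is an illustration, not the authors' proposal): compare the rate of "high risk" labels across demographic groups and flag any group that diverges beyond a tolerance.

```python
# Hypothetical audit check: flag groups whose high-risk rate diverges from
# the overall rate by more than a chosen tolerance. Column names assumed.
import pandas as pd

def audit_high_risk_rates(decisions: pd.DataFrame, threshold: float = 0.5,
                          max_disparity: float = 0.2) -> dict:
    decisions = decisions.assign(high_risk=decisions["risk_score"] >= threshold)
    overall = decisions["high_risk"].mean()
    by_group = decisions.groupby("group")["high_risk"].mean()
    flagged = by_group[(by_group - overall).abs() > max_disparity]
    return {"overall_rate": overall,
            "rate_by_group": by_group.to_dict(),
            "flagged_groups": flagged.to_dict()}
```

Rate comparisons like this are only one fairness criterion among several, but they are cheap enough to run automatically on every model release.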


Self-Regulated Interactive Sequence-to-Sequence Learning

arXiv.org Machine Learning

Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the self-regulator discovers an $\epsilon$-greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning.
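
The $\epsilon$-greedy behaviour the abstract reports can be pictured as a bandit over feedback "arms" whose reward is quality gain minus feedback cost. The sketch below is one illustrative reading of that idea, not the paper's learning-to-learn setup; the feedback types and their costs are assumed.

```python
# Illustrative epsilon-greedy selector over feedback types (costs assumed).
import random

FEEDBACK_TYPES = ["full_correction", "error_markup", "self_supervision"]
COSTS = {"full_correction": 1.0, "error_markup": 0.4, "self_supervision": 0.0}

class EpsilonGreedyRegulator:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.totals = {f: 0.0 for f in FEEDBACK_TYPES}
        self.counts = {f: 0 for f in FEEDBACK_TYPES}

    def choose(self) -> str:
        if random.random() < self.epsilon:  # explore a random feedback type
            return random.choice(FEEDBACK_TYPES)
        return max(FEEDBACK_TYPES,          # otherwise exploit best mean reward
                   key=lambda f: self.totals[f] / self.counts[f] if self.counts[f] else 0.0)

    def update(self, feedback: str, quality_gain: float) -> None:
        reward = quality_gain - COSTS[feedback]  # cost-aware reward signal
        self.totals[feedback] += reward
        self.counts[feedback] += 1
```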


Explaining Machine Learning and Artificial Intelligence in Collections to the Regulator

#artificialintelligence

There is significant growth in the application of machine learning (ML) and artificial intelligence (AI) techniques within collections, as these techniques have been shown to create numerous efficiencies: from enhancing the results of predictive models to powering AI bots that interact with customers, leaving staff free to address more complex issues. At present, one of the major constraints on using this advanced technology is the difficulty of explaining the decisions made by these solutions to regulators. This regulatory focus is unlikely to diminish, especially given the examples of AI bias that continue to be uncovered across applications, resulting in discriminatory behavior towards different groups of people. While collections-specific regulations remain somewhat undefined on the subject, major institutions are falling back on their broader policy: namely, that any decision needs to be fully explainable. Although there are explainable AI (xAI) techniques that can help us gain deeper insights from ML models, such as FICO's xAI Toolkit, the path to achieving sign-off within an organization can be a challenge.
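
The article names FICO's xAI Toolkit; as a generic open-source stand-in (an assumption, not the tooling the article describes), the shap library produces the kind of per-decision feature attribution a reviewer or regulator can inspect. The model and inputs here are hypothetical.

```python
# Hypothetical per-decision explanation for a tree-based collections model.
import shap
from sklearn.ensemble import GradientBoostingClassifier

def explain_decision(model: GradientBoostingClassifier, single_row):
    """Return each feature's additive contribution to one scored account."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(single_row)
    # Each value is the feature's push towards or away from the prediction,
    # i.e. the sort of per-decision rationale "fully explainable" demands.
    return shap_values
```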


From self-tuning regulators to reinforcement learning and back again

arXiv.org Machine Learning

Machine learning and reinforcement learning (RL) are being applied to plan and control the behavior of autonomous systems interacting with the physical world -- examples include self-driving vehicles, distributed sensor networks, and agile robots. However, if machine learning is to be applied in these new settings, the resulting algorithms must come with the reliability, robustness, and safety guarantees that are hallmarks of the control theory literature, as failures could be catastrophic. Thus, as RL algorithms are increasingly and more aggressively deployed in safety-critical settings, it is imperative that control theorists be part of the conversation. The goal of this tutorial paper is to provide a jumping-off point for control theorists wishing to work on RL-related problems, by covering recent advances in bridging learning and control theory and by placing these results within the appropriate historical context of the system identification and adaptive control literatures.
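
For readers coming from the RL side, the "self-tuning regulator" of the title has a very compact classical form: estimate the unknown dynamics online by recursive least squares, then control as if the estimate were true (certainty equivalence). The toy scalar example below is illustrative only and not taken from the paper.

```python
# Toy self-tuning regulator for x' = a*x + b*u + w with unknown (a, b):
# recursive least-squares (RLS) estimation plus certainty-equivalence control.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 1.2, 0.7          # plant parameters, unknown to the controller
x = 1.0

theta = np.zeros(2)                # estimate of [a, b]
P = np.eye(2) * 100.0              # RLS covariance

for t in range(200):
    a_hat, b_hat = theta
    # Certainty-equivalence (deadbeat) control: act as if the estimate were true.
    u_ce = -(a_hat / b_hat) * x if abs(b_hat) > 1e-3 else 0.0
    u = float(np.clip(u_ce, -10.0, 10.0)) + 0.01 * rng.standard_normal()  # clip + dither
    x_next = a_true * x + b_true * u + 0.01 * rng.standard_normal()

    # RLS update of the parameter estimate from the observed transition.
    phi = np.array([x, u])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (x_next - phi @ theta)
    P = P - np.outer(K, phi @ P)
    x = x_next

print("estimated (a, b):", theta, "true:", (a_true, b_true))
```

The safety questions the tutorial raises show up even in this toy: while the estimate of b is briefly wrong, the "confident" controller can destabilize the very system it is learning.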


An X-ray was once between you and your doctor, but for how long?

#artificialintelligence

A visit to the doctor seems one-on-one. But how will that feeling change when the data gleaned from that interaction takes on unprecedented value? It's a question that doctors and health regulators are grappling with as algorithms learn how to spot pneumonia and health data becomes the raw material needed to train artificial intelligence. "Previously, the patient is agreeing to supply their very intimate personal information ... to the doctor to help with the diagnosis and management of their own health," said Jacob Jaremko, an associate professor in radiology and diagnostic imaging at the University of Alberta. "You provide, for your own care, for your own benefit ... your data."


How Silicon Valley's whiz-kids finally ran out of friends John Naughton

The Guardian

Remember the time when tech companies were cool? Once upon a time, Silicon Valley was the jewel in the American crown, a magnet for high-IQ – and predominantly male – talent from all over the world. Palo Alto was the centre of what its more delusional inhabitants regarded as the Florence of Renaissance 2.0. Parents swelled with pride when their offspring landed a job with the Googles, Facebooks and Apples of this world, where they stood a sporting chance of becoming as rich as they might have done had they joined Goldman Sachs or Lehman Brothers, but without the moral odium attendant on investment banking. I mean to say, where else could you be employed by a company to which every president, prime minister and aspirant politician craved an invitation?