#299: On the Novelty Effect in Human-Robot Interaction, with Catharina Vesterager Smedegaard


The typical view is that while something is new, or "a novelty," it initially makes us behave differently than we normally would. Over time, as the novelty wears off, we are likely to return to our regular behaviors. For example, a new robot may cause a person to behave differently at first, as it is introduced into the person's life; but after some time the robot will no longer be as exciting, novel, and motivating, and the person may return to their previous behavioral patterns, interacting with the robot less.

Algorithms for Non-negative Matrix Factorization

Neural Information Processing Systems

Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
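The least-squares variant of the multiplicative update rules described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's reference implementation; the small `eps` added to the denominators is a common numerical-stability convention, and the initialization scheme is an assumption.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF minimizing ||V - WH||_F^2.

    Each update rescales H by (W^T V) / (W^T W H) and W by
    (V H^T) / (W H H^T), which keeps both factors non-negative
    and monotonically decreases the squared reconstruction error.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps   # non-negative random init (assumed)
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# usage: factor a small non-negative matrix into rank-2 parts
V = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf_multiplicative(V, rank=2)
reconstruction_error = np.linalg.norm(V - W @ H)
```

Because each step multiplies by a non-negative factor, no projection or clipping is needed to maintain the non-negativity constraint.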

Forecasting jump arrivals in stock prices: new attention-based network architecture using limit order book data


The existing literature provides evidence that limit order book data can be used to predict short-term price movements in stock markets. This paper proposes a new neural network architecture for predicting return jump arrivals one minute ahead in equity markets with high-frequency limit order book data. This new architecture, based on Convolutional Long Short-Term Memory with Attention, is introduced to apply time series representation learning with memory and to focus the prediction attention on the most important features in order to improve performance. The attention mechanism also makes it possible to analyze the importance of including limit order book data and other input variables. The proposed architecture is compared to existing deep learning architectures on a data set consisting of order book data for five liquid U.S. stocks over 18 months.
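The attention idea at the core of the architecture, weighting time steps by learned relevance so the prediction focuses on the most informative moments, can be illustrated with a stripped-down NumPy sketch. This is not the paper's ConvLSTM-Attention model; the scoring vector `w` stands in for parameters a real network would learn, and the feature values are toy numbers.

```python
import numpy as np

def soft_attention(X, w):
    """Attention pooling over time.

    X: (T, d) matrix of T time steps of d limit-order-book features.
    w: (d,) scoring vector (learned end-to-end in a real model).
    Returns the attention-weighted summary vector and the weights,
    which show how much each time step contributed to the prediction.
    """
    scores = X @ w                 # (T,) relevance score per time step
    scores -= scores.max()         # shift for numerically stable softmax
    alpha = np.exp(scores)
    alpha /= alpha.sum()           # softmax -> attention weights, sum to 1
    context = alpha @ X            # (d,) weighted combination of steps
    return context, alpha

# toy example: 4 time steps of 3 order-book features
X = np.array([[0.1, 0.2, 0.0],
              [0.9, 0.1, 0.3],
              [0.2, 0.8, 0.5],
              [0.4, 0.4, 0.4]])
w = np.array([1.0, 0.5, 0.0])
context, alpha = soft_attention(X, w)
```

Inspecting `alpha` after training is what allows the importance analysis the abstract mentions: inputs that consistently receive high weights are the ones the model relies on.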

r/MachineLearning - [N] Open world RPG with 'dungeon master ai' and 'story engine' in the works using neural network and machine learning


It's very easy to say that your new app or game uses AI. I'm all for people experimenting with AI in these kinds of ways, even if the results are lackluster. But there's a big difference between trying to use AI in a new and experimental way versus using it for hype and marketing. I suppose we'll see when they produce something we can scrutinize, but until then this is just advertising, which isn't appropriate in this sub (although there are plenty of video game subs that may be interested in this).

Reducing Risk In AI And Machine Learning-Based Medical Technology


Artificial intelligence and machine learning (AI/ML) are increasingly transforming the healthcare sector. From spotting malignant tumours to reading CT scans and mammograms, AI/ML-based technology is faster and more accurate than traditional devices – or even the best doctors. But along with the benefits come new risks and regulatory challenges. In their latest article, "Algorithms on regulatory lockdown in medicine," recently published in Science, Boris Babic, INSEAD Assistant Professor of Decision Sciences; Theodoros Evgeniou, INSEAD Professor of Decision Sciences and Technology Management; Sara Gerke, Research Fellow at Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics; and I. Glenn Cohen, Professor at Harvard Law School and Faculty Director at the Petrie-Flom Center, look at the new challenges facing regulators as they navigate the unfamiliar pathways of AI/ML. They consider the question: What new risks do we face as AI/ML devices are developed and implemented?

KDnuggets News 19:n36, Sep 25: The Hidden Risk of AI and Big Data; The 5 Sampling Algorithms every Data Scientist needs to know - KDnuggets


Data Quality Assessment Is Not All Roses. What Challenges Should You Be Aware Of?; 5 Famous Deep Learning Courses/Schools of 2019; 12 Deep Learning Researchers and Leaders

Will White Box AI Eliminate Bias in Machine Learning Algorithms? Probably Not.


The problem identified in this PaymentsSource article is that machine learning tools learn to be biased, and that this bias is invisible because the machine learning model is a black box: it doesn't divulge how it is making its decisions. But even if the model were a white box solution that clearly identified which data elements were used to make a decision, it isn't clear developers would recognize a biased decision. The problem is that bias can be encoded so deeply in the data set that it is very hard to detect. For example, if the data used to train the model is old, no amount of "gender correction" will be sufficient, because women's salaries are higher today than in the past. Or if the algorithm identifies access to running water as a key contributor to a decision, will people recognize that non-whites are far more likely to lack access to clean water and sanitation?

How StreetLight Data uses machine learning to plug cities into the mobility revolution


The mobility revolution may have the potential to transform cities, but in the short term the rise in ride-hailing apps, bike sharing, and electric scooters is giving many local officials fits. A healthy dose of data and machine learning may help get this movement back on track. That's the bet that San Francisco-based StreetLight Data is making. The company is helping cities harness the explosion of data being generated by everything from smart city sensors to mobile phones to new transportation modes, in a bid to reinvent urban planning. As cities groan under rising populations and pollution, making more effective use of data could be the key to making them habitable over the long run.