The Australian government is providing AU$31.8 million to the Australian Research Council to fund the study of responsible, ethical, and inclusive autonomous decision-making technologies. The Centre of Excellence for Automated Decision-Making and Society, to be based at the Royal Melbourne Institute of Technology (RMIT), will house researchers who will work with experts from seven other Australian universities, as well as 22 academic and industry partner organizations in Australia, Europe, Asia, and the U.S. The global research project aims to ensure machine learning and decision-making technologies can be used safely and ethically. Said RMIT researcher Julian Thomas, "Working with international partners and industry, the research will help Australians gain the full benefits of these new technologies, from better mobility, to improving our responses to humanitarian emergencies."
The latest book by Daniel Kahneman, Olivier Sibony, and Cass Sunstein has yet to be released, but the talk about Noise is well underway. Interviews, reviews, and essays have been making the rounds, and a lot of ground has been covered. The coverage goes well beyond the concept of noise itself: how it differs from bias, why this difference matters -- and how it all ties in with decision-making. Noise, defined as random error caused by inconsistent decision-making, is said to cost organizations a lot of money. And, if accurate, the dollar amounts being tossed around do sound quite alarming.
Machine-learning algorithms can be fooled by small, well-designed adversarial perturbations. This is reminiscent of cellular decision-making, where ligands (called antagonists) prevent correct signaling, as in early immune recognition. We draw a formal analogy between neural networks used in machine learning and models of cellular decision-making (adaptive proofreading). We apply attacks from machine learning to simple decision-making models and show explicitly the correspondence to antagonism by weakly bound ligands. Such antagonism is absent in more nonlinear models, which inspires us to implement a biomimetic defense in neural networks that filters out adversarial perturbations.
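Adversarial perturbations of the kind described above are commonly constructed with the fast gradient sign method (FGSM). The sketch below applies it to a toy logistic "network"; the model, weights, and function names are illustrative assumptions for exposition, not the paper's actual setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM attack on a logistic model p = sigmoid(w.x + b).
    The gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w,
    so we nudge each input coordinate by eps in the direction of that gradient's sign."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy input the model classifies confidently as class 1 (all numbers illustrative)
w, b = [2.0, -1.0], 0.0
x = [1.5, 0.5]                      # score = 2.5 -> p ~ 0.92
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=1.0)

score = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b
print(score(x) > 0)      # True: original input classified as class 1
print(score(x_adv) > 0)  # False: the signed perturbation flips the decision
```

The defense the authors describe works in the opposite direction: a sufficiently nonlinear decision function suppresses the effect of such weak, spread-out perturbations rather than accumulating them.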
Hutchins and colleagues have quantified these predictions in a novel metric called "Approximate Potential to Translate" (APT), which is highly accurate with as little as two years of post-publication data. APT values can be used by researchers and decision-makers to focus attention on areas of science with strong signatures of translational potential. Although numbers alone should never be a substitute for evaluation by human experts, the APT metric has the potential to accelerate biomedical progress as one component of data-driven decision-making. The model that computes APT values makes predictions based on the content of research articles and the articles that cite them. A long-standing barrier to research and development of metrics like APT is that such citation data has remained hidden behind proprietary, restrictive, and often costly licensing agreements.
Every business needs experts responsible for analyzing pertinent data and helping inform employee decision-making. But many leaders aren't taking full advantage of the analytical tools at their disposal and rely heavily on gut instinct in situations where data provides a more complete picture. In situations without data or precedent, instinctive decision-making is likely the most viable option. But this strategy is unnecessarily risky when the data shows the outcomes of similar situations in the past. Educating employees on the historical odds of a decision keeps them from taking unnecessary risks and gives leadership a chance to carefully consult the data and weigh the consequences and costs of failure.
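The case for consulting historical odds can be made concrete with a simple expected-value check. The numbers and function below are illustrative assumptions, not figures from the article:

```python
def expected_value(p_success, payoff, cost_of_failure):
    """Expected outcome of a decision, given its historical success rate."""
    return p_success * payoff - (1.0 - p_success) * cost_of_failure

# Illustrative assumption: similar initiatives have succeeded 1 time in 4
ev = expected_value(p_success=0.25, payoff=100_000, cost_of_failure=60_000)
print(ev)  # -20000.0: despite the attractive payoff, the data argues against it
```

A gut call might fixate on the payoff; weighting it by the historical success rate and the cost of failure, as above, is the kind of simple data-driven check the paragraph advocates.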