black-box algorithm
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Education > Educational Setting > Online (0.48)
- Energy > Power Industry (0.45)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.83)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.34)
The Dangers of AI
We've all seen movies where AI takes over the world (I, Robot is probably my favorite), but what are its potential harms today? Let's try to understand where these dangers can arise in the first place. Modern AI relies on various black-box algorithms: they produce the desired results, but the reasoning behind why they perform as well as or better than humans is often lost in the process, or rarely ever evaluated. Now you might be wondering: if we control the results, how is AI going to take over the world? The answer is that it probably won't. What can go wrong, though, is its ability to obtain the results companies or organizations want by crossing moral or legal boundaries without anybody knowing or realizing it, not even the companies themselves.
Explainable artificial intelligence: Easier said than done - STAT
The growing use of artificial intelligence in medicine is paralleled by growing concern among many policymakers, patients, and physicians about the use of black-box algorithms. In a nutshell, it's this: We don't know what these algorithms are doing or how they are doing it, and since we aren't in a position to understand them, they can't be trusted and shouldn't be relied upon. A new field of research, dubbed explainable artificial intelligence (XAI), aims to address these concerns. As we argue in Science magazine, together with our colleagues I. Glenn Cohen and Theodoros Evgeniou, this approach may not help and, in some instances, can hurt. Artificial intelligence (AI) systems, especially machine learning (ML) algorithms, are increasingly pervasive in health care.
- North America > United States (0.15)
- North America > Canada > Ontario > Toronto (0.15)
Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals
There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision.
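The core idea above, quantifying how likely a decision would have differed under an intervention, can be illustrated with a minimal Monte Carlo sketch. Everything here is hypothetical: `black_box` is a toy stand-in for an opaque scorer, and the features and noise model are invented for illustration, not taken from the paper.

```python
import random

random.seed(0)

def black_box(income, age, noise):
    """Hypothetical stand-in for an opaque decision model."""
    return income + 2 * age + noise > 100

def p_contrastive(factual, attr, alt_value, n=10_000):
    """Monte Carlo estimate of the contrastive counterfactual probability
    P(decision would have differed had `attr` been `alt_value`),
    marginalizing over exogenous noise shared between both worlds."""
    flips = 0
    for _ in range(n):
        noise = random.gauss(0, 5)  # same unmodeled factors in both worlds
        y_factual = black_box(factual["income"], factual["age"], noise)
        cf = dict(factual, **{attr: alt_value})
        y_counterfactual = black_box(cf["income"], cf["age"], noise)
        flips += y_factual != y_counterfactual
    return flips / n

applicant = {"income": 60, "age": 20}
print(p_contrastive(applicant, "income", 90))
```

A probability near 1 would suggest the intervened attribute strongly influences the decision, which is also the kind of quantity that supports actionable recourse for the affected individual.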
Australian Authorities Want an AI To Settle Your Divorce
For better or worse, there's a good chance your current love life owes something to automation. Even if you're just hooking up with the occasional Tinder fling (which if you are, no judgment), you're still turning to Tinder's black-box algorithms to pick out that fling for you before turning to more black-box algorithms to pick out the best dingy bar to meet them at before turning to more black-box algorithms to figure out what, exactly, should be your date night lewk. If things get serious further down the line, you might turn to another black-box algorithm to plan your entire damn wedding for you. And if it turns out you got married for all the wrong reasons, it turns out there's another set of black boxes you can plug your details into to settle the details of your divorce. Known as "amica," the service was rolled out yesterday by the Australian government as a way to let soon-to-be-exes "make parenting arrangements" and "divide their money and property" without having to go through the hassle of hiring a lawyer to do the heavy lifting.
- Oceania > Australia (0.88)
- North America > United States (0.05)
- Law (0.74)
- Government > Regional Government > Oceania Government > Australia Government (0.72)
Making AI Human Again: The importance of Explainable AI (XAI)
As the explosion of algorithms and Artificial Intelligence (AI) continues across business and society, we are already facing ethical, regulatory and business-critical issues around how we use the output from machine learning. The issue of who evaluates the decisions made by AI -- if anybody -- is becoming more urgent. "Programs are not products, they are processes… we will never be sure what a process does until we run it -- as occurred recently when Amazon's facial recognition software misidentified 28 members of Congress as criminal suspects." Of course, the history of technology is the story of augmenting human limitations with machinery or tools that enable us to do more than our bodies or minds let us. But are we on the verge of losing control of this vital process?
Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning
Agrawal, Mayank, Peterson, Joshua C., Griffiths, Thomas L.
Large-scale behavioral datasets enable researchers to use complex machine learning algorithms to better predict human behavior, yet this increased predictive power does not always lead to a better understanding of the behavior in question. In this paper, we outline a data-driven, iterative procedure that allows cognitive scientists to use machine learning to generate models that are both interpretable and accurate. We demonstrate this method in the domain of moral decision-making, where standard experimental approaches often identify relevant principles that influence human judgments, but fail to generalize these findings to "real world" situations that place these principles in conflict. The recently released Moral Machine dataset allows us to build a powerful model that can predict the outcomes of these conflicts while remaining simple enough to explain the basis behind human decisions.
- Transportation (0.72)
- Health & Medicine > Therapeutic Area (0.47)
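The procedure the abstract describes, using an accurate model to guide the search for an interpretable one, can be sketched as model distillation: label synthetic dilemmas with a complex predictor, then find the simplest rule that reproduces its outputs. The feature names and the black box below are invented for illustration and are not the paper's actual model or dataset.

```python
import random

random.seed(1)

def black_box(dilemma):
    """Hypothetical stand-in for a complex predictor of moral judgments."""
    lives_saved, hits_passenger, age_gap = dilemma
    return 3 * lives_saved - 2 * hits_passenger + age_gap > 0

# Generate synthetic dilemmas and label them by querying the black box.
dilemmas = [(random.randint(-3, 3), random.randint(0, 1), random.randint(-2, 2))
            for _ in range(500)]
labels = [black_box(d) for d in dilemmas]

def stump_accuracy(feature, threshold):
    """How well a one-feature threshold rule mimics the black box."""
    preds = [d[feature] > threshold for d in dilemmas]
    return sum(p == y for p, y in zip(preds, labels)) / len(dilemmas)

# Search for the single-feature rule that best reproduces the predictions.
best_feature, best_threshold = max(
    ((f, t) for f in range(3) for t in range(-3, 4)),
    key=lambda ft: stump_accuracy(*ft))
print(best_feature, best_threshold, stump_accuracy(best_feature, best_threshold))
```

Here the surrogate recovers the dominant feature, mirroring the paper's goal of a model simple enough to explain while remaining faithful to the complex one's predictions.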
The case for open source classifiers in AI algorithms
Dr. Carol Reiley's achievements are too long to list. She co-founded Drive.ai, a self-driving car startup that raised $50 million in its second round of funding last year. Forbes magazine named her one of "20 Incredible Women in AI," and she built intelligent robot systems as a PhD candidate at Johns Hopkins University. But when she built a voice-activated human-robot interface, her own creation couldn't recognize her voice. Dr. Reiley used Microsoft's speech recognition API to build her interface.
- North America > United States > North Carolina > Wake County > Raleigh (0.06)
- North America > United States > Wisconsin (0.05)
- North America > United States > Pennsylvania (0.05)
- Information Technology (0.91)
- Law (0.76)
- Transportation (0.58)
Improved Strongly Adaptive Online Learning using Coin Betting
Jun, Kwang-Sung, Orabona, Francesco, Willett, Rebecca, Wright, Stephen
This paper describes a new parameter-free online learning algorithm for changing environments. In comparing against algorithms with the same time complexity as ours, we obtain a strongly adaptive regret bound that is a factor of at least $\sqrt{\log(T)}$ better, where $T$ is the time horizon. Empirical results show that our algorithm outperforms state-of-the-art methods in learning with expert advice and metric learning scenarios.
- North America > United States > New York > Suffolk County > Stony Brook (0.04)
- North America > United States > Florida > Broward County > Fort Lauderdale (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (0.48)
- Research Report > Promising Solution (0.34)
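The abstract's algorithm reduces online learning to coin betting. As a rough illustration of that primitive, here is the classic Krichevsky-Trofimov (KT) bettor in plain Python: a sketch of the underlying parameter-free idea, not the strongly adaptive algorithm from the paper.

```python
def kt_coin_betting(payoffs, initial_wealth=1.0):
    """Krichevsky-Trofimov coin betting: bet a data-driven fraction of
    current wealth each round, with no learning rate to tune.
    `payoffs` are signed outcomes c_t in [-1, 1]."""
    wealth = initial_wealth
    payoff_sum = 0.0
    bets = []
    for t, c in enumerate(payoffs, start=1):
        beta = payoff_sum / t      # KT betting fraction from past payoffs
        bet = beta * wealth        # wager a fraction of current wealth
        bets.append(bet)
        wealth += c * bet          # gain or lose the wager
        payoff_sum += c
    return bets, wealth

# A coin biased toward +1: wealth grows without any tuned parameters.
bets, final_wealth = kt_coin_betting([1, 1, 1, -1] * 50)
print(final_wealth)
```

Because the betting fraction adapts to the observed payoffs, wealth grows whenever the sequence has a consistent bias, and the reduction converts that wealth guarantee into a regret bound.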