Issues: Overviews


Robots in the Boardroom

#artificialintelligence

Steamships, electricity, the railroad, airplanes, the internet: technology and business have always been intertwined. Now a new tech revolution is under way as organizations figure out how to use artificial intelligence to help them make faster, smarter, and more productive decisions. Recent research from Harvard Business School faculty maps the possibilities and pitfalls of this digital transformation. In "Ethics Bots and Other Ways to Move Your Code of Business Conduct Beyond Puffery," digital technologies allow companies to create more effective codes of business conduct. And "Why Artificial Intelligence Isn't a Sure Thing to Increase Productivity" asks: as companies adopt artificial intelligence to increase efficiency, are their employees skilled enough to use those technologies effectively?


Robots as Actors in a Film: No War, A Robot Story

arXiv.org Artificial Intelligence

Andreagiovanni Reina, Viktor Ioannou, Junjin Chen, Lu Lu, Charles Kent, James A. R. Marshall. Abstract: Will the Third World War be fought by robots? This short film is a lighthearted comedy that aims to trigger an interesting discussion and reflection on the terrifying killer-robot stories that increasingly fill us with dread when we read the news headlines. The fictional scenario takes inspiration from current scientific research and describes a future where robots are asked by humans to join the war. Robots are divided, sparking protests in robot society... will robots join the conflict, or will they refuse to be employed in human warfare? Food for thought for engineers, roboticists, and anyone imagining what the upcoming robot revolution could look like.


MOBAs and the Future of AI Research

#artificialintelligence

In previous articles, I've looked at a variety of video games that have proven useful test-beds for AI research, with the likes of Ms. Pac-Man, Super Mario Bros. and, more recently, StarCraft. But in this instance I want to look at a genre that is still relatively new whilst presenting exciting opportunities for AI research: the Multiplayer Online Battle Arena (MOBA). The MOBA genre is undoubtedly one of the most popular in gaming today, but what impact could it have upon AI research? I'm going to provide an overview of MOBAs as a genre, discuss which aspects of their design can prove interesting to AI research, and look at some projects that are now bearing fruit both in academia and in corporate research labs. Multiplayer Online Battle Arenas are an offshoot of real-time strategy (RTS) games, originating with the Aeon of Strife map for Blizzard's StarCraft, followed by the 'Defence of the Ancients' mod for WarCraft III: Reign of Chaos and its expansion, The Frozen Throne.


Challenges of Human-Aware AI Systems

arXiv.org Artificial Intelligence

From its inception, AI has had a rather ambivalent relationship to humans, swinging between their augmentation and their replacement. Now, as AI technologies enter our everyday lives at an ever increasing pace, there is a greater need for AI systems to work synergistically with humans. To do this effectively, AI systems must pay more attention to aspects of intelligence that helped humans work with each other, including social intelligence. I will discuss the research challenges in designing such human-aware AI systems, including modeling the mental states of humans in the loop, recognizing their desires and intentions, providing proactive support, exhibiting explicable behavior, giving cogent explanations on demand, and engendering trust. I will survey the progress made so far on these challenges, and highlight some promising directions. I will also touch on the additional ethical quandaries that such systems pose. I will end by arguing that the quest for human-aware AI systems broadens the scope of the AI enterprise, necessitates and facilitates true inter-disciplinary collaborations, and can go a long way towards increasing public acceptance of AI technologies.


Healthcare cybersecurity – the impact of AI, IoT-related threats and recommended approaches

#artificialintelligence

Currently leading healthcare security strategy at Cylera, a biomedical HIoT security startup, Richard Staynings has more than two decades of experience in both cybersecurity leadership and client consulting in healthcare. Last year, he served on the Committee of Inquiry into the SingHealth breach in Singapore as an expert witness. He recently spoke to Healthcare IT News on some of the current developments in healthcare cybersecurity. Q. Artificial intelligence (AI) applications in healthcare are all the rage now, and so are cybersecurity threats, given the frequency and intensity of healthcare-related incidents. In particular, some of the cyberattacks have become more sophisticated through the use of AI to get past cyber defenses.


Northeastern researchers team up with Accenture to offer a road map for artificial intelligence ethics oversight

#artificialintelligence

"We've been doing research on these issues for some time and it became really clear about a year ago that there was a significant need for some kind of committee-based oversight related to data and information ethics," says Sandler. "It was also clear that there wasn't any good guidance on it, and so we thought, well why don't we take what we've learned from other contexts and apply it to this context rather than trying to start from scratch."


A Framework for Responsible Artificial Intelligence

#artificialintelligence

We aren't alone in trying to move responsible AI from discussion to action while the technology is still in its infancy. The European Commission High-Level Expert Group on Artificial Intelligence and Singapore Personal Data Protection Commission also have independent initiatives underway. And the Montreal Declaration for Responsible Development of Artificial Intelligence and various industry-led or regional ethical AI projects are also addressing the issue. These are additional resources for associations willing to use their influence to ignite broad stakeholder adoption. Through conferences and education, associations can offer safe forums for thoughtful debate and practical planning around the fundamental choices we make for responsible AI. Associations stand at a tipping point of AI disruption. Industry and government stakeholders are looking for sensible guideposts for responsible conduct. Will you help define, model, and adopt responsible AI?


Concept-Centric Visual Turing Tests for Method Validation

arXiv.org Machine Learning

Recent advances in machine learning for medical imaging have led to impressive increases in model complexity and overall capabilities. However, the ability to discern the precise information a machine learning method uses to make decisions has lagged behind, and it is often unclear how this performance is in fact achieved. Conventional evaluation metrics that reduce method performance to a single number or a curve provide only limited insight, yet systems used in clinical practice demand thorough validation that such crude characterizations miss. To this end, we present a framework to evaluate classification methods based on a number of interpretable concepts that are crucial for a clinical task. Our approach is inspired by the Turing Test and by the question of how to devise a test that adaptively questions a method about its ability to interpret medical images. To do this, we make use of a Twenty Questions paradigm whereby a probabilistic model characterizes the method's capacity to grasp task-specific concepts, and we introduce a strategy to sequentially query the method according to its previous answers. The results show that the probabilistic model is able to expose both the dataset's and the method's biases, and can be used to reduce the number of queries needed for confident performance evaluation.
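The adaptive querying the abstract describes can be sketched as a small Twenty Questions loop: keep a Beta posterior over how well the method has grasped each clinical concept, always ask about the concept whose posterior is currently most uncertain, and update the posterior on each answer. The concept names, the simulated competence values, and the variance-based selection rule below are illustrative assumptions for the sketch, not the paper's actual model.

    import numpy as np

    # Hypothetical clinical concepts a classifier might (or might not) have learned.
    CONCEPTS = ["lesion_size", "lesion_margin", "calcification", "tissue_density"]

    def query_method(concept, rng):
        """Stand-in for posing a concept-specific question to the model under test.
        Returns True if the method answers correctly; here the answer is simulated."""
        assumed_competence = {"lesion_size": 0.9, "lesion_margin": 0.7,
                              "calcification": 0.5, "tissue_density": 0.3}
        return rng.random() < assumed_competence[concept]

    def run_twenty_questions(n_queries=20, seed=0):
        rng = np.random.default_rng(seed)
        # Beta(1, 1) prior over each concept's "grasp" probability.
        alpha = {c: 1.0 for c in CONCEPTS}
        beta = {c: 1.0 for c in CONCEPTS}

        def posterior_variance(c):
            a, b = alpha[c], beta[c]
            return a * b / ((a + b) ** 2 * (a + b + 1))

        for _ in range(n_queries):
            # Ask about the concept we are least certain about,
            # i.e. where one more question is most informative.
            concept = max(CONCEPTS, key=posterior_variance)
            if query_method(concept, rng):
                alpha[concept] += 1
            else:
                beta[concept] += 1

        # Posterior mean = estimated probability the method grasps each concept.
        return {c: alpha[c] / (alpha[c] + beta[c]) for c in CONCEPTS}

    if __name__ == "__main__":
        print(run_twenty_questions())

Run as-is, the estimates separate the concepts the simulated method handles well from the ones it does not, which is the kind of concept-level profile a single accuracy number would hide.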


The Role of Cooperation in Responsible AI Development

arXiv.org Artificial Intelligence

In this paper, we argue that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and have a positive social impact. Ensuring that AI systems are developed responsibly may therefore require preventing and solving collective action problems between companies. We note that there are several key factors that improve the prospects for cooperation in collective action problems. We use this to identify strategies to improve the prospects for industry cooperation on the responsible development of AI.
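The underinvestment argument has the structure of a prisoner's dilemma, which a toy payoff table makes concrete; the numbers below are purely illustrative and not taken from the paper. Each firm's best response is to cut safety spending regardless of what the other does, yet mutual cutting leaves both worse off than mutual investment, which is exactly the collective action problem the authors want cooperation to solve.

    # Illustrative payoffs (firm A, firm B). "invest" = spend on safety/security;
    # "cut" = skip that spending to ship faster and gain a competitive edge.
    PAYOFFS = {
        ("invest", "invest"): (3, 3),   # both develop responsibly
        ("invest", "cut"):    (1, 4),   # the corner-cutter wins market share
        ("cut", "invest"):    (4, 1),
        ("cut", "cut"):       (2, 2),   # mutual corner-cutting: worse than (3, 3)
    }

    def best_response(opponent_action, player_index):
        """Action maximizing this player's payoff, holding the opponent's action fixed."""
        def payoff(my_action):
            profile = ((my_action, opponent_action) if player_index == 0
                       else (opponent_action, my_action))
            return PAYOFFS[profile][player_index]
        return max(["invest", "cut"], key=payoff)

    # "cut" dominates for each firm, so (cut, cut) is the equilibrium
    # even though (invest, invest) pays more for both.
    for other in ["invest", "cut"]:
        print(f"If the other firm plays {other!r}, "
              f"firm A's best response is {best_response(other, 0)!r}")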


Artificial Intelligence Governance and Ethics: Global Perspectives

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, and its implementation is expected to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is being accompanied by disquiet over problematic and dangerous implementations of AI, or indeed over AI itself deciding to take dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics, and have resulted in various actors from different countries and sectors issuing ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, combining our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.