How to evolve artificial intelligence models alongside societal needs - MedCity News

#artificialintelligence

We live in a time when generation-shaping events and emerging technology have converged at a rapid pace – influencing the way our society communicates and interacts with one another. Nowhere is this more evident than with the rise of digital therapeutics in the field of mental health, as new care models and apps are developed for the treatment of physical and behavioral health conditions such as pain, sleep, anxiety and depression. While it is often presumed that the act of bonding is exclusive to human therapeutic relationships, recent studies have shown that digital therapeutic tools are in fact capable of establishing a comparable therapeutic bond with users. But in the same way that relationships must be nurtured between people, so must the connection between virtual mental health services and their users. As society becomes more open to and reliant on these tools, it is the responsibility of technology companies, especially those tasked with assisting people's mental health, to build and maintain artificial intelligence (AI) and machine learning (ML) models that adapt alongside societal needs.


Needs-aware Artificial Intelligence: AI that 'serves [human] needs'

Watkins, Ryan, Human, Soheil

arXiv.org Artificial Intelligence

Many boundaries shape, and will continue to shape, the future of Artificial Intelligence (AI). We push on these boundaries in order to make progress, but they are both pliable and resilient--always creating new boundaries of what AI can (or should) achieve. Among these are technical boundaries (such as processing capacity), psychological boundaries (such as human trust in AI systems), ethical boundaries (such as with AI weapons), and conceptual boundaries (such as the AI people can imagine). It is within this final category (though it can play a fundamental role in all the other boundaries) that we find the construct of needs and the limitations that our current concept of need places on the future of AI.


Study finds that few major AI research papers consider negative impacts

#artificialintelligence

In recent decades, AI has become a pervasive technology, affecting companies across industries and throughout the world. These innovations arise from research, and the research objectives in the AI field are influenced by many factors. Together, these factors shape patterns in what the research accomplishes, as well as who benefits from it -- and who doesn't. In an effort to document the factors influencing AI research, researchers at Stanford, the University of California, Berkeley, the University of Washington, and University College Dublin & Lero surveyed 100 highly cited studies submitted to two prominent AI conferences, NeurIPS and ICML. They claim that in the papers they analyzed, which were published in 2008, 2009, 2018, and 2019, the dominant values were operationalized in ways that centralize power, disproportionately benefiting corporations while neglecting society's least advantaged.


The Impact of Artificial Intelligence (AI) - Societal, Organisational, Personal

#artificialintelligence

Organised by The Artificial Research Centre, Brunel University London, in association with the British Academy of Management (E-Business / Government, Organisational Transformation, Change and Development and Strategy SIGs). The next generations of technological development driven by Artificial Intelligence (AI) are unlike anything we have seen before. Data is the fuel driving development in the Big Data era. Business leaders, policy makers and the public are only just beginning to grasp the unquenchable thirst algorithms have for data. Many human activities are already being tracked and traced using smart sensors, apps, mobile devices and wearable tech. As more of the things we come into contact with become part of the internet of things, our every move will generate more data about us: our behaviours, habits, preferences and displeasures.


Keys to a sustainable future

#artificialintelligence

Energy Star was launched in 1992 by the US Environmental Protection Agency as a voluntary labelling programme recognising the value of energy efficiency in a broad range of computer-related products, from personal computers to air-conditioning systems. The programme's major success was the widespread adoption of the energy-saving "sleep mode" in consumer electronic devices. Energy Star's innovative breakthrough represents an important platform from which today's concept of computational sustainability was launched. Computational sustainability is a field of interdisciplinary research that attempts to optimise societal, economic and environmental resources using advanced decision-making algorithms supported by the ever-increasing processing power of today's evolving computer systems. Its key goals include the development of computational models, methods and tools to assist in managing the delicate balance between environmental, economic and societal needs. Advancements in AI and human-computer interaction (HCI) have enabled combinations of robots and humans to carry out critical functions in the most hostile of environments.