Collaborating Authors


Smarter health: How AI is transforming health care


CHAKRABARTI: Episode one, the digital caduceus. In the not so distant future, artificial intelligence and machine learning technologies could …

ASI Robot #48 Peter Blake - Artificial Super Intelligence Robot Club


Each and every one of them is unique. The time has come: the technological singularity has arrived, and Artificial Super Intelligence has been born. Using blockchain technology, the ASI then divided itself into a group of independent, self-aware personalities, each with a particular architecture and different rights to access protected information. This is how the ASIRC was formed.

AI and machine learning are improving weather forecasts, but they won't replace human experts


A century ago, English mathematician Lewis Fry Richardson proposed a startling idea for that time: constructing a systematic process based on math for predicting the weather. In his 1922 book, "Weather Prediction By Numerical Process," Richardson tried to write an equation that he could use to solve the dynamics of the atmosphere based on hand calculations. It didn't work because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.

La veille de la cybersécurité


The Pentagon has tapped artificial intelligence ethics and research expert Diane Staheli to lead the Responsible AI (RAI) Division of its new Chief Digital and AI Office (CDAO), FedScoop confirmed on Tuesday. In this role, Staheli will help steer the Defense Department's development and application of policies, practices, standards and metrics for buying and building AI that is trustworthy and accountable. She enters the position nearly nine months after DOD's first AI ethics lead exited the Joint Artificial Intelligence Center (JAIC), and in the midst of a broad restructuring of the Pentagon's main AI-associated components under the CDAO. "[Staheli] has significant experience in military-oriented research and development environments, and is a contributing member of the Office of the Director of National Intelligence AI Assurance working group," Sarah Flaherty, CDAO's public affairs officer, told FedScoop.

Visual Turing Test


On a rainy evening during the Easter holidays, I figured I could spice up the dinner conversation by talking about an exciting AI development with my family. As the younger brother, the heavy burden of being "the tech guy" fell on my shoulders -- the rest of my family are not the most technical, with my sister being an artist both at heart and by profession. The algorithm I had enthusiastically talked about was the newly released Dall-e 2 model, which can produce visual art comparable to human art. The conversation went deep into what constitutes art and what makes us 'special', and a claim was made that the depth of emotion a human artist conveys would be lacking from AI-generated designs, and that they could therefore be discerned with 100% certainty. You may guess my instant reaction to that last claim: it was time to conduct a test!

BMW, Pasqal Apply Quantum Computing to Car Design, Manufacturing - EE Times Europe


Car manufacturer BMW and quantum computing technology developer Pasqal have entered a new phase of collaboration to analyze the applicability of quantum computational algorithms to the modeling of metal-forming applications. The automotive industry is one of the most demanding industrial environments, and quantum computing could solve some of its key design and manufacturing issues. According to a report by McKinsey, automotive will be one of the primary value pools for quantum computing, with a high impact noticeable by about 2025. The consulting firm also expects a significant economic impact of related technologies for the automotive industry, estimated at $2 billion to $3 billion, by 2030. Volkswagen Group led the way with the launch of a dedicated quantum computing research team back in 2016.

Preparing the Global Workforce for AI Disruption


Within the next decade, the world will see a major disruption of the workforce due to advances in artificial intelligence (AI) technology. According to a McKinsey Global Institute report, 375 million workers, or about 14 percent of the global workforce, may be required to shift occupations as digitization, automation, and AI technologies start to take over the workspace. In a separate 2018 report by the Organization for Economic Cooperation and Development (OECD), half of the global workforce is expected to be impacted one way or another by machine-learning technologies. AI technology will be at the forefront of the Fourth Industrial Revolution, and it will prove to be a far greater challenge than the ones that preceded it. If the world does not prepare, robots and technology could cause mass unemployment.

Ethics in Robotics and Artificial Intelligence


As robots become increasingly intelligent and autonomous, from self-driving cars to assistive robots for vulnerable populations, important ethical questions inevitably emerge wherever and whenever such robots interact with humans and thereby affect human well-being. Questions that must be answered include whether such robots should be deployed in human societies in fairly unconstrained environments, and what kinds of provisions are needed in robotic control systems to ensure that autonomous machines will not cause harm to humans, or will at least minimize harm when it cannot be avoided. The goal of this specialty is to provide the first interdisciplinary forum for philosophers, psychologists, legal experts, AI researchers, and roboticists to disseminate work specifically targeting the ethical aspects of autonomous intelligent robots. Note that the conjunction of "AI and robotics" here indicates the journal's intended focus on the ethics of intelligent autonomous robots, not the ethics of AI in general or the ethics of non-intelligent, non-autonomous machines. Examples of questions that we seek to address in this journal are:
-- computational architectures for moral machines
-- algorithms for moral reasoning, planning, and decision-making
-- formal representations of moral principles in robots
-- computational frameworks for robot ethics
-- human perceptions and the social impact of moral machines
-- legal aspects of developing and disseminating moral machines
-- algorithms for learning and applying moral principles
-- implications of robotic embodiment/physical presence in social space
-- variance of ethical challenges across different contexts of human-robot interaction

Is Artificial Intelligence Made in Humanity's Image? Lessons for an AI Military Education - War on the Rocks


Artificial intelligence is not like us. For all of AI's diverse applications, human intelligence is not at risk of losing its most distinctive characteristics to its artificial creations. Yet, when AI applications are brought to bear on matters of national security, they are often subjected to an anthropomorphizing tendency that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous AI military education should recognize that this anthropomorphizing is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition -- cognitive science.