Human decisions still needed in artificial intelligence for war

#artificialintelligence

US President Joe Biden should not heed the advice of the National Security Commission on Artificial Intelligence (NSCAI) to reject calls for a global ban on autonomous weapons. Instead, Biden should work on an innovative approach to prevent humanity from relinquishing its judgment to algorithms during war. The NSCAI maintains that a global treaty prohibiting the development, deployment and use of weapons systems enabled by artificial intelligence (AI) is not in the interests of the United States and would harm international security. It argues that Russia and China are unlikely to adhere to such a treaty, and that a global ban would increase pressure on law-abiding nations while enabling others to utilise military AI systems in unsafe and unethical ways.


Can the European Union prevent an artificial intelligence dystopia?

New Scientist

A European Union plan to regulate artificial intelligence could see companies that break proposed rules on mass surveillance and discrimination fined millions of euros. Draft legislation, leaked ahead of its official release later this month, suggests the EU is attempting to find a "third way" on AI regulation, between the free-market US and authoritarian China. The draft rules include an outright ban on AI designed to manipulate people "to their detriment", carry out indiscriminate surveillance or calculate "social scores". Much of the wording is currently vague enough that it could cover either the entire advertising industry or nothing at all. In any case, the military and any agency ensuring public security are exempt.


Artificial Intelligence: Regulatory Trends

#artificialintelligence

The potential positive economic effects of artificial intelligence (AI) have been well documented, with several high-profile studies highlighting its impact on areas such as workforce productivity and wealth creation. At the same time, widespread adoption of AI technologies has brought increased scrutiny and a sharper focus on AI's potentially harmful implications. Listed below are the key regulatory trends impacting the AI theme, as identified by GlobalData. In 2020, both the US and Europe took steps to regulate AI, but with notable differences in approach: Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of overregulation.


Thousands of US government agencies are using Clearview AI without approval

Engadget

Nearly two thousand government bodies, including police departments and public schools, have been using Clearview AI without oversight. BuzzFeed News reports that employees from 1,803 public bodies used the controversial facial-recognition platform without authorization from their bosses. Reporters contacted a number of agency heads, many of whom said they were unaware their employees were accessing the system. A database of searches, outlining which agencies accessed the platform and how many queries they made, was leaked to BuzzFeed by an anonymous source. BuzzFeed has published a version of the database online, enabling you to examine how many times each department has used the tool.


US Military Seeks to Speed AI Adoption for Support Systems

#artificialintelligence

The US military needs to scale up its use of AI or be left behind by adversaries, Lt. Gen. Michael Groen, chief of the Pentagon's Joint AI Center (JAIC), told a recent conference of the National Defense Industrial Association, according to a report from UPI. While current military use of AI "is a step in the right direction, we need to start building on it," stated Groen, who was appointed head of the JAIC in October. He is the second director of JAIC, or "the jake" in Pentagon parlance, which was set up by Congress in 2018. The first director was Air Force Lt. Gen. John N.T. "Jack" Shanahan, who retired last year. Noting that China has said it intends "to be dominant in AI by 2030," the Pentagon has focused on a five-year program culminating in 2027.


The CPSC Digs In on Artificial Intelligence

#artificialintelligence

American households are increasingly connected internally through the use of artificially intelligent appliances.1 But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders from across the industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the final say on regulating the safety of AI and machine-learning consumer products. The CPSC is an independent agency composed of five commissioners who are nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration's shift away from the deregulation agenda of the prior administration and three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.2


Biden's New Deal and the Future of Human Capital

The New Yorker

No one in Washington seems to know what the story is, or even where to set the dateline. Is it the culture war over masks, in the Florida sunshine? Is it the crisis along the southern border? CNN's prime-time viewership is down thirty-seven per cent, MSNBC's numbers are not much better, and even Fox's are in decline. The morning political-newsletter writers, and many of the rest of us, have been reduced to replaying the dramas of the Trump Administration (Why is John Boehner backing an Ohio congressman whom Trump opposes?) or even the Obama years (How much hold does Larry Summers have on the Democratic Party?). For a moment this week the story was whether one of the Bidens' German shepherds, Major, has a biting problem.


Do Not Be Alarmed by Wild Predictions of Robots Taking Everyone's Jobs

Slate

In February, the McKinsey Global Institute predicted that 45 million Americans, one-quarter of the workforce, would lose their jobs to automation by 2030. That was up from its 2017 estimate of 39 million, a revision McKinsey attributed to the economic dislocation of COVID-19: historically, firms tend to replace some of the workers they fire during recessions with machines. Fear of robot-driven mass unemployment has become increasingly mainstream. Andrew Yang, who is currently leading the polls for the Democratic nomination to be the next mayor of New York City, made the threat of automation a pillar of his unorthodox 2020 presidential campaign.


Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians

#artificialintelligence

Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism for dealing with the uncertainty between what is known and what is unknown. Several research studies have highlighted the need to improve AI-based systems and enhance their capabilities to help clinicians. However, assessing the magnitude and impact of human trust in AI technology demands substantial attention. Will a clinician trust an AI-based system? What factors influence human trust in AI? Can trust in AI be optimized to improve decision-making? In this paper, we focus on clinicians as the primary users of AI systems in health care and present the factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.