The US Food and Drug Administration has given marketing clearance to CognICA, an artificial intelligence–powered integrated cognitive assessment for the early detection of dementia. Developed by Cognetivity Neurosciences Ltd, CognICA is a 5-minute, computerized cognitive assessment that is completed using an iPad. The test offers several advantages over traditional pen-and-paper cognitive tests, the company said in a news release. "These include its high sensitivity to early-stage cognitive impairment, avoidance of cultural or educational bias and absence of learning effect upon repeat testing," the company notes. Because the test runs on a computer, it can support remote, self-administered testing at scale and is geared toward seamless integration with existing electronic health record systems, the company adds.
Washington – A senior al-Qaida leader was killed in a U.S. drone strike in Syria, the Pentagon said Friday. The strike came two days after a base in southern Syria, used by the U.S.-led coalition fighting the Islamic State group, was attacked. "A U.S. airstrike today in northwest Syria killed senior al-Qaida leader Abdul Hamid al-Matar," said Central Command spokesman Maj. John Rigsbee. There were no known casualties from the strike, he said, adding it was conducted using an MQ-9 aircraft. "The removal of this al-Qaida senior leader will disrupt the terrorist organization's ability to further plot and carry out global attacks," he said.
The US military has killed senior al Qaeda leader Abdul Hamid al-Matar in a drone strike in Syria, a US Central Command spokesman said. "The removal of this al Qaeda senior leader will disrupt the terrorist organisation's ability to further plot and carry out global attacks threatening US citizens, our partners, and innocent civilians," US Army Major John Rigsbee said in a written statement late on Friday. The strike came two days after a US outpost in southern Syria was attacked. Rigsbee did not say whether the US drone strike was carried out in retaliation for that attack.
Two accounts of how AI developers within the federal government are pursuing AI accountability practices were presented at the AI World Government event held virtually and in person this week in Alexandria, Va. Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency and plans to make available to others. And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a unit of the Department of Defense founded to help the US military make faster use of emerging commercial technologies, described his unit's work to translate principles of AI development into guidelines that an engineer can apply. Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO's Innovation Lab, discussed an AI Accountability Framework he helped develop by convening a forum of experts from government, industry, and nonprofits, including federal inspector general officials and AI specialists. "We are adopting an auditor's perspective on the AI accountability framework," Ariga said.
He pointed to China's largest genomics company, BGI, which purchased the U.S. firm Complete Genomics in 2013. Over the years, BGI has made inroads in American hospitals and health-care institutions, offering inexpensive large-scale DNA sequencing, he said. Providing such services is not illegal, but at the same time, You said, BGI is gaining access to massive amounts of Americans' genetic data.
The North Atlantic Treaty Organization (NATO), the military alliance of 30 countries that border the North Atlantic Ocean, this week announced that it would adopt its first AI strategy and launch a "future-proofing" fund with the goal of investing around $1 billion. Speaking at a news conference, Secretary-General Jens Stoltenberg said that the effort was in response to "authoritarian regimes racing to develop new technologies." NATO's AI strategy will cover areas including data analysis, imagery, and cyberdefense, he added. NATO said in a July press release that it was "currently finalizing" its strategy on AI and that principles of responsible use of AI in defense will be "at the core" of the strategy. Speaking to Politico in March, NATO assistant secretary general for emerging security challenges David van Weel said that the strategy would identify ways to operate AI systems ethically, pinpoint military applications for the technology, and provide a "platform for allies to test their AI to see whether it's up to NATO standards."
Nelson left HuffPost at the end of 2018 to work full time on the concept for the game. While at HuffPost, Nelson wrote a newsletter with a humorous spin on the day's political headlines. But Nelson says the news cycle can often feel like a "SportsCenter" highlight reel, continually cycling through provocative tweets and sound bites from politicians. And, because of that, Nelson said the average reader may think American politics is nothing more than a "Twitter-fueled boxing match, where occasionally there's a Supreme Court nominee or a major bill."
United States officials issued new warnings Friday about China's ambitions in artificial intelligence and a range of advanced technologies that could eventually give Beijing a decisive military edge and possible dominance over healthcare and other essential sectors in the US. The warnings include a renewed effort to inform business executives, academics, and local and state government officials about the risks of accepting Chinese investment or expertise in key industries, officials at the National Counterintelligence and Security Center said. While the center does not intend to tell officials to reject Chinese investment, it will encourage efforts to control intellectual property and implement security measures. National security agencies under President Joe Biden's administration are making an aggressive public push against China, which some officials have called the greatest strategic threat to the US. At the same time, the Biden administration has tried to ease some tensions with Beijing dating from the administration of former US President Donald Trump and to seek common ground on trade and climate change.
This week, the Biden administration confirmed a Reuters report that it plans to appoint Missy Cummings, an engineering professor at Duke University and a former fighter pilot, as the senior adviser for safety at the National Highway Traffic Safety Administration. The head of Duke's Humans and Autonomy Lab, Cummings is an expert in human factors, a field examining interactions between people and machines. That's an important skill set for the development of advanced driver-assistance systems, or ADAS, emerging automotive technologies that rely on safe handoffs between car and driver on the roadway. It's the crucial bridge between the mostly dumb vehicles we drive today and self-driving cars. Cummings has studied ADAS for years, and she has been a vocal critic of Tesla's deployment of its Autopilot feature, which enables a vehicle to moderate speed, make turns, and respond to traffic signals on its own (though, contra the feature's name, the driver must remain vigilant and ready to intervene).
The Air Force Research Laboratory, in partnership with the United Kingdom's Defence Science and Technology Laboratory (Dstl), has demonstrated for the first time the ability of the U.S. and the U.K. to jointly develop, select, train and deploy state-of-the-art machine learning algorithms in support of the armed forces of each of the two nations. This research is designed to support adjacent, collaborating U.S. and U.K. brigades with enduring wide-area situational awareness, which aims to improve decision-making, increase operational tempo, reduce risk to life and reduce manpower burden. The virtual demonstration was hosted jointly at AFRL's Information Directorate in Rome and at Dstl's site near Salisbury, U.K., on Oct. 18. The demonstration highlighted integrated AI technologies across the two nations, showcasing the ability to share data and algorithms through a common development and deployment platform to enable the rapid selection, testing and deployment of AI capabilities. The event was made possible by a U.K.-U.S. partnership agreement concerning autonomy and AI collaboration established in December 2020.