Politics are in the air, like that ominous reddish glow suffocating much of the West in recent weeks on account of all those tragic wildfires. This coming week we get our first presidential debate: a chance for Donald Trump and Joe Biden to shake hands and have a respectful, reasoned exchange of views on the future of the unfairly maligned Section 230 of the Communications Decency Act; the need to reform the Stored Communications Act; the wisdom of replicating Europe's General Data Protection Regulation; the merits of taking antitrust action against Google for its manipulation of search results or against Amazon for its treatment of third-party sellers on its platform. Maybe we will even see the candidates reflect humbly on humanity's place in the universe, in light of the breaking news from Venus. The debate will probably be all tense, no future--maybe not as heated as a debate between 2016 Lindsey Graham and 2020 Lindsey Graham, but close.
U.S. Chief Technology Officer Michael Kratsios and Energy Secretary Dan Brouillette shed a little light on how the Energy Department and Trump administration are thinking about ethics, regulatory approaches, and broader societal implications as they push the rollout of artificial intelligence and other emerging technologies. During a fireside chat in Pittsburgh Tuesday, Brouillette reflected on the similarly serious considerations the agency faced when it was developing nuclear technologies many years ago. He noted that now, when focusing on ethics, his mind tends to home in on negative aspects and "bad results" that could arise with tech adoption. "I haven't thought this through with great depth, but there seems to be some positive aspects of AI, too, on the ethics front that we need to explore," Brouillette told the chat's moderator, Carnegie Mellon University Vice President of Research Michael McQuade. "And perhaps through that process we can speed the adoption of some of these technologies," he said, adding that he'd like to give it all more thought.
In February of this year, the Department of Defense (DoD) issued five Ethical Principles for Artificial Intelligence (AI): Responsible, Equitable, Traceable, Reliable and Governable. The DoD principles build on 2019 recommendations from the Defense Innovation Board and the interim report of the National Security Commission on AI (NSCAI). The defense industry and others in the private sector have also been considering ethical issues regarding AI, including whether businesses should have an AI code of ethics. When cyber first became an issue, about 22 years ago, the trend was to raise awareness and think through the consequences. Similarly, now we are developing awareness of the issues and beginning to think through the consequences of AI.
The following declaration was released by the Governments of the United States of America and the United Kingdom of Great Britain and Northern Ireland during the September 25 inaugural meeting of the Special Relationship Economic Working Group. We intend to establish a bilateral government-to-government dialogue on the areas identified in this vision and explore an AI R&D ecosystem that promotes the mutual wellbeing, prosperity, and security of present and future generations. Signed in London and Washington on September 25, 2020, in two originals, in the English language.
Be prepared in the near future, when you gaze into the blue skies, to perceive a whole series of strange-looking things--no, they will not be birds, nor planes, nor even Superman. They may be mistaken for UFOs, at least temporarily and in some cases startlingly, given their bizarre and ominous appearance. But, in due course, they will become recognized as valuable objects of a new era of human-made flying machines, intended to serve a broad range of missions and objectives. Many such applications are already well entrenched in serving essential functions that extend the capabilities of our vital infrastructures, such as transportation, utilities, the electric grid, agriculture, emergency services, and many others. Rapidly advancing technologies have made possible the dramatic capabilities of unmanned aerial vehicles (UAVs/drones), which can now perform functions that were inconceivable a mere few years ago.
The US military is testing a smart watch and ring system capable of detecting illnesses two days before the wearer develops symptoms. Called Rapid Analysis of Threat Exposure (RATE), the project is using Garmin and Oura devices that have been programmed with artificial intelligence trained on nearly 250,000 coronavirus cases and other sicknesses. The system notifies the user of an oncoming illness with a score from 1 to 100 indicating how likely illness is over the next 48 hours. Military officials note that "Within two weeks of us going live we had our first successful COVID-19 detect."
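RATE's actual model is proprietary and trained on hundreds of thousands of cases, but the general shape of such a wearable risk score can be sketched simply: combine deviations from the wearer's personal physiological baseline into a single probability, then map it onto the 1-to-100 scale. Everything below--the features, weights, and thresholds--is a hypothetical illustration, not RATE's method.

```python
import math

# Hypothetical feature weights: deviations from the wearer's own baseline.
# These values are invented for illustration; a real system would learn them
# from large labeled datasets.
WEIGHTS = {
    "resting_hr_delta": 0.08,   # beats/min above baseline
    "skin_temp_delta": 0.9,     # degrees C above baseline
    "hrv_delta": -0.05,         # ms change in heart-rate variability
    "respiration_delta": 0.3,   # breaths/min above baseline
}
BIAS = -3.0

def rate_style_score(features: dict) -> int:
    """Map baseline deviations to a 1-100 risk score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    probability = 1.0 / (1.0 + math.exp(-z))
    return max(1, min(100, round(probability * 100)))

healthy = {"resting_hr_delta": 1, "skin_temp_delta": 0.1,
           "hrv_delta": 2, "respiration_delta": 0}
unwell = {"resting_hr_delta": 12, "skin_temp_delta": 1.2,
          "hrv_delta": -15, "respiration_delta": 4}

print(rate_style_score(healthy))  # low score
print(rate_style_score(unwell))   # elevated score, would trigger a 48-hour alert
```

A deployed system would recompute this score continuously and alert the wearer whenever it crosses a threshold within the 48-hour window.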
Today, the Trump Administration announced that the United States and the United Kingdom signed a Declaration on Cooperation in Artificial Intelligence Research and Development. Through this historic R&D cooperation agreement, we will work together to drive technological breakthroughs, promote researcher collaboration, and advance the development of trustworthy AI. Today's announcement is an outcome of the U.S. – UK Special Relationship Economic Working Group, which was established following a meeting between President Donald J. Trump and Prime Minister Boris Johnson last year. "America and our allies must lead the world in shaping the development of cutting edge AI technologies and protecting against authoritarianism and repression. We are proud to join our special partner and ally, the United Kingdom, to advance AI innovation for the well-being of our citizens, in line with shared democratic values," said Michael Kratsios, U.S. Chief Technology Officer.
The Administration said the agreement will build "upon previous action by the United States to engage with likeminded international partners to accelerate the development of trustworthy AI innovation."
Researchers at the University of California at Riverside are working to teach computer vision systems what objects typically exist in close proximity to one another, so that if one is altered, the system can flag it, potentially thwarting malicious interference with artificial intelligence systems. The yearlong project, supported by a nearly $1 million grant from the Defense Advanced Research Projects Agency, aims to understand how hackers target machine-vision systems with adversarial AI attacks. Led by Amit Roy-Chowdhury, an electrical and computer engineering professor at the school's Marlan and Rosemary Bourns College of Engineering, the project is part of the Machine Vision Disruption program within DARPA's AI Explorations program. Adversarial AI attacks, which attempt to fool machine learning models by supplying deceptive input, are gaining attention. "Adversarial attacks can destabilize AI technologies, rendering them less safe, predictable, or reliable," Carnegie Mellon University Professor David Danks wrote in IEEE Spectrum in February.
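To make "deceptive input" concrete: the classic evasion technique nudges each input feature a small amount in the direction that most increases the model's loss, flipping the prediction while barely changing the input (the idea behind the fast gradient sign method). The toy linear "classifier" and its weights below are invented for demonstration; real attacks of this kind target deep vision models.

```python
# Hypothetical trained linear model: classify as 1 if the weighted sum is positive.
WEIGHTS = [0.9, -0.4, 0.7, 0.2]

def predict(x):
    score = sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1 if score > 0 else 0

def fgsm_perturb(x, true_label, eps=0.4):
    """Shift every feature by eps in the direction that increases the loss.

    For a linear model, the sign of the loss gradient w.r.t. the input is
    +sign(w) when the true label is 0 and -sign(w) when it is 1.
    """
    direction = -1 if true_label == 1 else 1
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + direction * eps * sign(w) for w, xi in zip(WEIGHTS, x)]

x = [0.5, 0.1, 0.4, 0.3]            # clean input, correctly classified as 1
x_adv = fgsm_perturb(x, true_label=1, eps=0.4)
print(predict(x), predict(x_adv))   # -> 1 0: the prediction flips
```

The UC Riverside work approaches the defense side of this problem: if a perturbed object no longer fits the context of the objects around it, the system can flag the input as suspect.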
Researchers from Florida Atlantic University's College of Engineering and Computer Science have received a four-year, $1 million grant from the National Science Foundation for a project to make the master's degree in artificial intelligence (AI) accessible to high-achieving, low-income students. The accelerated five-year program, combining a bachelor of science degree with a master's degree in AI, is designed to adapt curricular and co-curricular support to enable students to complete their degrees in AI, autonomous systems or machine learning, critically important areas for advancing America's global competitiveness and national security. "Artificial intelligence is transforming every walk of life from business to healthcare and enabling us to rethink how we analyze data, integrate massive amounts of information and make informed decisions that impact society, the economy and governance," said Stella Batalama, Ph.D., dean of FAU's College of Engineering and Computer Science and a co-principal investigator of the grant. "This important grant from the National Science Foundation will allow us to recruit and train talented and diverse students who are economically disadvantaged and provide them with a unique opportunity to pursue graduate education in an exciting and burgeoning field." By preparing increased numbers of high-achieving, low-income students to become engineers in these fields, the project addresses the need to grow a more diverse STEM (science, technology, engineering and mathematics) research population.